This week at Retraction Watch featured revelations of fraud in more than $100 million in government research, and swift findings in a much-discussed case. Here’s what was happening elsewhere:
- Which 18 naughty journals were delisted from Thomson Reuters’ Journal Citation Reports for excessive self-citation and citation stacking? (Thomson Reuters)
- Here’s one: Does a journal about homeopathy belong in science? ask our co-founders in STAT.
- “[T]here is no effective route for a whistleblower who has uncovered evidence of dubious behaviour by editors,” concludes Dorothy Bishop. (For some background on the case Bishop discusses, see our previous coverage.)
- Is this misconduct? After questioning another group’s research, a James Cook University professor was censured for “failing to act in a collegial way and in the academic spirit of the institution,” The Australian reports. (sub req’d)
- Elies Bik uncovered inappropriate image duplication in hundreds of papers. Here’s a glimpse at how the pattern recognition part of her brain works.
- “Does our sense of understanding carry any objective signs that we are right?” asks J.D. Trout on truths in science. (The Scientist)
- A bioethics icon had some questionable experiments in his own background. (James P. Rathmell, STAT)
- Here’s why using Google to find journals to publish in may be a bad idea. (Mehdi Dadkhah, The Journal of the Association for Vascular Access)
- “What happens when a study produces evidence that doesn’t support a scientific hypothesis?” asks Neuroskeptic.
- “Cell lines are widely used in biomedical research to understand fundamental biological processes and disease states, yet most researchers do not perform a simple, affordable test to authenticate these key resources.” (Amanda Capes-Davis, Richard Neve, PLOS Biology) More from Almeida et al, also in PLOS Biology.
- What happened when researchers sued other researchers for stealing their ideas? Nancy Sims explores. (College & Research Libraries News)
- ACS Publications wants to hear your peer review stories.
- Health economics research: Just like a five-year-old’s art project. (Dan Gorenstein, Marketplace)
- “With every new report, a wave of weariness washes over me: ‘Really?’ ‘Still?’ my mind cries.” Margaret Wertheim on sexism in science. (Aeon)
- Ban predatory publishers’ papers from the literature, says Jeffrey Beall. (Nature)
- “A semantic confusion is clouding one of the most talked-about issues in research:” Reproducibility. (Monya Baker, Nature)
- “Everyone has the personal story of, ‘You know what? I did all the work and [my colleague] got the first authorship, I think that’s unfair,’” says Philippa Saunders. (BMJ)
- “Can a bunch of doctors keep an $8 billion secret?” asks Michelle Cortez of Bloomberg. “Not on Twitter.” Features our sister blog, Embargo Watch.
- “It’s time to tackle implicit bias in peer review, to ensure that the best science is funded and published,” says Geraldine Richmond. (Live Science)
- “[H]ow does a scientist navigate the co-authorship issue when translating their work beyond their discipline?” asks Manu Saunders.
- Paul Knoepfler looks at how nine publications handled coverage of stem cells and stroke victims. (The Niche)
- “Many people continue to believe, for instance, despite massive evidence to the contrary, that childhood vaccines cause autism (they do not); that people are safer owning a gun (they are not); that genetically modified crops are harmful (on balance, they have been beneficial); that climate change is not happening (it is).” Atul Gawande on the mistrust of science. (New Yorker)
- Adriana Bankston argues that research ethics must come before accolades. (ASCB)
- A new preprint on arXiv takes a look at what happens when different authors in the same field have the same name.
- What does it mean to be an author when it comes to software code? asks Joris J. van Zundert. (Interdisciplinary Science Reviews)
- What can and can’t p-curve – a way to test the “evidential value of diverse sets of findings” – do? The team at Data Colada explains.
- French authorities have opened an involuntary manslaughter investigation into a drug trial that turned deadly earlier this year. (Reuters)
- We “need to know what is required to better align medical journals and oversight institutions with the public interest,” says Mark Wilson, reviewing the case of Vioxx and the New England Journal of Medicine. (Indian Journal of Medical Ethics)
- Hindawi and Wiley are partnering, making nine of Wiley’s subscription journals open access. (press release)
- A new model aims to allocate credit for work based on Impact Factor and coauthorship contribution. (Javier E. Contreras-Reyes, arXiv)
- The Tuskegee Syphilis Experiment’s effects may have been even more long-lasting than previously thought, reports Ike Swetlitz in STAT.
- Mike Spagat continues to find problems in survey data from Iraq. (Stats.org; War, Numbers, and Human Losses)
- Biologists are getting serious about preprints, writes Jessica Wright at Spectrum.
- “Memories of unethical actions fade faster,” according to a new study. (Minds for Business)
- “Is there any justification for academic social science?” asks Martyn Hammersley. (LSE Impact Blog)
- “[T]he relatively higher administrative overheads of larger universities become an organizational liability in times of rapid institutional change,” according to a new study in Scientometrics. (sub req’d)
- Former U.S. Office of Research Integrity director Chris Pascal has died. (Legacy.com)
- “Peer review and bibliometric indicators just don’t match up,” according to an analysis of an Italian research evaluation by Alberto Baccini and Giuseppe De Nicolao. (LSE Impact Blog)
- “Self-citation is relatively rare in pediatric journals,” says a new study. (Scientometrics, sub req’d)
- Should researchers take another look at Microsoft Academic Search? Anne-Wil Harzing suggests the answer may be yes. (Scientometrics, sub req’d)
- A researcher boasts about having hundreds of publications, but looks can be deceiving, says Jeffrey Beall.
- “How do you spend the majority of your time as a tenured or tenure-track professor in the natural sciences?” A survey by Alexandra Wright and colleagues tried to find out. (PLOS Blogs)
- Politics in scholarly publishing has been with us for a long time, says Anna Gielas. (PLOS Blogs)
- Why one researcher spent $1 million of his own money on gun violence research. (Caleb Lewis, Vox)
- Juice companies “game science to perpetuate the myth that cranberry prevents [urinary tract infections],” says Julia Belluz. (Vox)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our new daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
Regarding “Which 18 naughty journals were delisted from Thomson Reuters’ Journal Citation Reports for excessive self-citation and citation stacking?”:
It might be worthwhile for RW to dig a bit deeper than TR’s announcements into why journals lose their journal impact factor. The delisting of Springer’s Environmental Earth Sciences (formerly Environmental Geology) caught my eye, as my colleagues have published in it, and it has been around a long time as one of the many mid-tier, niche journals that publish respectable but not highly selective work. Its last JIF was 1.7, according to the journal.
I dug into it in Scopus, and there are indeed some anomalies, but it’s not obvious that anomalous = naughty. In the early 2000s, as Environmental Geology, it published about 300 articles a year, but output took off from 297 articles in 2010 (its first year as Environ Earth Sci) to 1,137 in 2015. Total citations shot up by a factor of 10 over that period. Self-citation rates were indeed high: from 2011 through 2015, annual self-citation rates were 11, 29, 29, 40, and 35%, respectively. For comparison, rates at two other journals in the field (JAWRA and Hydrology and Earth System Sciences) looked to be under 10%. I glanced through the most highly cited article published in 2013, on landslide susceptibility in Korea, with 53 citations in Scopus. Many of the citing articles were indeed self-citations from the same journal, but they all seemed relevant to the topic.
Because TR calculates the 2-year JIF with an arithmetic mean rather than a median, a single highly cited article can shift a low JIF value quite a bit in proportional terms. Maybe this is a gotcha and there is indeed a heavy-handed company editor behind the scenes twisting authors’ arms to boost citations, but maybe it’s just that the journal is influential within its niche and authors read and cite others in that niche. Looking back at the 25 most recently published articles, only two were from authors in Western Europe or North America; Chinese authors were strongly represented. Certainly editors can misbehave, but if TR is delisting solely on the basis of statistical anomalies without digging deeper, it risks discriminating against a community of authors who selectively read and publish in a niche journal.
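To make that arithmetic concrete, here is a minimal sketch (in Python, with made-up citation counts, not data from this or any particular journal) of how one heavily cited paper pulls a mean-based two-year impact factor upward while barely touching the median:

```python
# A minimal sketch with made-up numbers (not data from this journal):
# how one heavily cited paper moves a mean-based two-year impact factor.
from statistics import mean, median

# Suppose a small niche journal published 50 citable items over the two
# preceding years, most of them modestly cited in the JIF year.
citations_per_item = [2, 1, 0, 3, 1] * 10   # 50 items

print("mean (JIF-style):", mean(citations_per_item))     # 1.4
print("median:", median(citations_per_item))              # 1.0

# Add one hit on the scale of the 53-citation landslide paper.
citations_per_item.append(53)

print("mean (JIF-style):", round(mean(citations_per_item), 2))  # 2.41
print("median:", median(citations_per_item))                    # still 1
```

On these made-up numbers the mean nearly doubles while the median does not budge, which is why a handful of well-cited papers can make a small niche journal’s citation statistics look anomalous even when nothing untoward is going on.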
I cannot support sacrificing legitimate journals on the altar of the Holy Church of Impact Factor. Who decides how many journal self-citations is too many? 70%? 60%? 20%? Are the publishers who maintain these citation indices (and who also control the journals themselves) accountable to anyone for their arbitrary decisions?
I have some concerns about the methodology employed by Dadkhah regarding the use of Google to find journals. While I agree that this is likely a poor technique for locating quality publishers, I question why he chose the 19 search phrases that he did. The overwhelming majority of his search phrases included some combination of fast/easy publication/review. Is there any evidence showing that authors actually use these phrases when searching Google? IMO, if you’re searching for a journal based on these criteria, then you’re all but asking to find a venue with potentially unsavory publishing ethics.
Agree with Anonymous above. In addition, who is “we”/”our”, considering there is only one author?
“We used Beall’s list and Jalalian’s hijacked journals list to detect questionable journals. In our opinion, the best ways to select a suitable journal for publishing your research are to use journal finders from publishers or do a search in scientific sites, such as Web of Science or Scopus.”