Weekend reads: Death of a cancer lab; women economists’ papers are more readable; self-correction grows
The week at Retraction Watch featured a study of why researchers commit misconduct, and the story of a former Northwestern scientist who sued the university for defamation. Here’s what was happening elsewhere:
- A once-thriving cancer research lab was driven to a slow death by mismanagement and fraud allegations after drawing the attention of two men presenting themselves as wealthy benefactors. (Jack Sullivan, CommonWealth)
- Female economists write more readable papers than their male peers — but their papers take, on average, six months longer to be published. (John Elmes, Times Higher Education)
- Aggravated by the tedious processes involved in correcting a paper, scientists are turning to PubMed Commons to get the word out. (Our co-founders, STAT)
- “The drive for eminence is inherently at odds with scientific values, and insufficient attention to this problem is partly responsible for the recent crisis of confidence in psychology and other sciences,” argues Simine Vazire. (PsyArXiv) And here’s the backstory about that piece.
- With “new system, scientists never have to write a grant application again,” reports Jop de Vrieze. (Science) Read our interview from last year with one of the architects of the proposal.
- Why isn’t post-publication peer review more common? Jon Tennant says it takes too much time and effort, and proposes solutions to make it feasible. (LSE Impact blog)
- “[I]n order to rein in journal prices and facilitate open access, journal publishing must be democratized,” argues Danielle Padula. (LSE Impact Blog)
- “Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance.” (bioRxiv)
- “To enable replication research, it is of paramount importance to carefully study the reliability of the instruments we use,” say two authors about medical education research.
- “[D]epartments that wish to increase grants and publications would be wise to foster a positive workplace climate.” (Journal of Women’s Health)
- “While it’s fairly easy to find an English-speaking cardiologist completely unconnected to a trial and its authors, it’s far more tricky to find, let’s say, an expert on Mediterranean fish stocks who can review articles in Croatian.” (Liz Wager, The BMJ; Wager is a member of our parent non-profit’s board of directors)
- “Where are the missing coauthors?” asks a paper examining whether collaborators in participatory research should get more credit. (Rural Sociology)
- “So it should come as no surprise that a drug that works in a mouse often doesn’t work in a person.” (Richard Harris, NPR)
- “What constitutes peer review of data?” That’s the question as data sharing practices become more widespread. (Todd Carpenter, The Scholarly Kitchen)
- “The oligopoly of publishers is . . . remarkable on the level of content consumption,” writes the author of a study looking at what gets downloaded from SciHub. (Tracy Vence, The Scientist)
- “In the last decade, on account of all clinical trials conducted by various pharmaceutical companies, nearly 2,800 patients are said to have died between 2005 and 2012 in India.” (Colin Gonsalves, The Hindu)
- From 2006 to 2015, the number of neuroscience papers published grew steadily, says a new paper that also analyzes which ones are cited most. (Neuroskeptic, Discover)
- The American Statistical Association’s intervention on misuse of p values hasn’t had much effect a year later, says Robert Matthews. Read our interview from a year ago. And Frank Harrell says misuse of p values is “one of the most pervasive problems in the medical literature.”
- Martin Ricker looks at a potential “revolution in scientometrics.” (Scientometrics)
- It’s common for patents to be based on initial drafts of peer reviewed papers — which means dire consequences if the results aren’t reproducible. (Jeremy Cubert, JD Supra Business Advisor)
- The UK Research Integrity Office publishes new guidance on good practice in authorship of research papers.
- When the researchers behind a study of eels’ navigational abilities received criticisms from their reviewers, they submitted it for publication in another journal without making changes. (Ryan F. Mandelbaum, Gizmodo)
- A new paper presents a method of detecting hijacked journals, which are typically more difficult for researchers to spot because they piggy-back off of respected, well-known journals. (Science and Engineering Ethics) Another paper in the same journal offers ways to distinguish fake and bogus conferences.
- “Most are well aware that the train has left the station with regard to open access, and the landscape is moving digital. […] The challenge is finding the right equilibrium.” (Jennifer Howard, EdSurge)
- Should the American Geophysical Union “be providing sanctions against scientists for their behavior towards other scientists?” asks Judith Curry.
- To improve diversity in STEM fields, it will take more than just ensuring doctoral completions: Minority grad students publish their work less than their peers. (Devin Powell, Science)
- “Infant research is hard…As a result, ours is a field of small sample sizes.” (Lisa Oakes, Infancy)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.