The week at Retraction Watch featured a frank admission of error by a Nobel Prize winner, and a look at five “diseases” plaguing science. Here’s what was happening elsewhere:
- 17 researchers sanctioned for misconduct by the U.S. Office of Research Integrity later received over $100 million in NIH funding. Our Alison McCook’s latest for Science.
- “We’re truly collateral damage.” The firing of an NIH scientist throws two dozen researchers into an unusual publication ban. (Jennifer Couzin-Frankel, Science)
- A junior doctor who expressed “horror, shame, and remorse” after it came to light that she faked data — and authors — in a study has been deemed fit to practice medicine again. (Hannah Somerville, The Oxford Times) See our previous coverage of this story here.
- A study that promised to crack the code of aging couldn’t be replicated – but led to a lesson in good science. The latest from our co-founders in STAT.
- Should the scientific record be corrected before the US Office of Research Integrity makes a finding? Perhaps so, says the director. (Debra Parrish, Science Editor)
- Predatory publishers: What to do? Cameron Neylon argues blacklists are fundamentally unethical. (The Impact Blog) The Open Access Scholarly Publishers Association explains how it supports innovative models of publishing while also keeping aware of predatory publishers. And the World Association of Medical Editors provides tips for identifying predatory publishers.
- Should Google remove retracted studies from search results? Dominique Brossard offers some thoughts on how to limit the spread of fake news. (press release)
- Researcher Rolf Zwaan explains what he found when he probed data inconsistencies that have led to a university investigation and a retraction. See our initial coverage of the situation here.
- A poorly written abstract can incline editors toward rejecting the paper. Faye Halpern and James Phelan offer pointers on how to write a more effective one. (Inside Higher Ed)
- The former students of Harvard’s Lee Rubin — one of whose current students has a restraining order against him — come to his defense in a letter published in Science. (sub req’d) See our coverage of the story here.
- A journal won’t publish a paper. The author says that’s editorial misconduct. (Mad in America)
- As much as 75% of URLs referenced in scholarly papers change over time, “raising significant concerns over the integrity of the scholarly record,” write Martin Klein and Herbert Van de Sompel. But there’s a way out of that mess, they argue.
- A preprint highlights the collaborative Data Champions initiative at the University of Cambridge as an example of good data management. (bioRxiv)
- A testosterone study released this week included a glaring ethical lapse that could potentially have harmed a significant portion of the participants. (Richard Harris, NPR)
- “[I]f we judge scientific quality based on the [Journal Impact Factor] or other journal based metrics we are either guided by invalid or weak arguments or in fact consider our uncertainty about the quality of the work and not the quality itself.” A preprint called “The impact factor fallacy.” (bioRxiv)
- Some publishers hope artificial intelligence can fix peer review, even though it’s a daunting task. (Nick Stockton, WIRED) Listen to our co-founder Adam Marcus’ thoughts on AI peer review on Science Friday here.
- “Journalists preferentially cover initial findings although they are often contradicted by meta-analyses and rarely inform the public when they are disconfirmed,” according to a new PLOS ONE study.
- “For this essay, I planned to use only references to open access articles, just to prove a point. It turned out to be impossible, which is a better proof than I would have liked.” Diana Wildschut reflects on citizen science. (Futures)
- Springer Nature launches Recommended, a personalized suggestion service to help researchers sift through the 4000 research papers published daily. (Mark Staniland, Nature.com’s Of Schemes and Memes blog)
- A preprint outlines the progress that’s been made toward the Reproducibility2020 goal and what priorities remain to be dealt with. (bioRxiv)
- “Yet, for every 100 hours spent at work, these female students are 15% less likely to publish a paper during that first year than their male counterparts are.” A new study reveals the publishing gap for female PhD students. (Maggie Kuo, Science)
- Allan Gaw discusses misconduct and fraud in clinical research with Richard Smith, the former editor of the BMJ, for the latest episode of The Business of Discovery podcast. (Smith is a member of the board of directors of our parent non-profit organization.)
- Mulubrhan Balehegn offers some options on how to discourage authors in developing countries from publishing in predatory journals, and what to do with faculty who have already advanced their careers through these publications. (International Information and Library Review)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
The interesting Galbraith paper (Table 2), which tracked respondents with ORI findings/sanctions from 1992 to 2016, found, as noted here, later new NIH funding for 17 of 284 such respondents (including 3 who were postdocs and 12 who were faculty at the time of their research misconduct, which led to their Office of Research Integrity findings being published by name). The paper also states that 13 of the 284 continued to receive NIH funding for their ongoing grants.
However, the paper does not describe any analysis as to how many, if any, of these 17 (or 13) respondents were debarred from Federal funding by ORI/HHS. The data in Table 1 indicate that 152 of the 284 tracked respondents were debarred (31 were postdocs and 52 were faculty).
If the ORI did not impose debarment on the 17 respondents who later obtained new NIH funding (nor on the 13 respondents who continued to receive NIH funding for their ongoing grants), then there would be nothing particularly “surprising” about the conclusions in the paper [that the 17 (6%) had received a total of $101 million in support on 61 new projects].
Amazing. The average PI in biomedical sciences is starving for grant funds, but the NIH is happy to give liars $5M+ on average. Knowingly committing fraud should be a lifetime ban, always. There are too many good scientists struggling to do good work to let the liars have a second chance.
“They committed misconduct, then earned $100 million in grants: find out their ONE WEIRD TRICK!”
-Buzzfeed
Nice 🙂