The week at Retraction Watch featured a PhD student expelled for submitting a paper without her co-authors’ permission, and a look at the six types of peer reviewers. Here’s what was happening elsewhere:
- “The average patient and even people in health care . . . kind of let their guard down when they’re in that database.” Some companies are using ClinicalTrials.gov for unsavory purposes. (Emily Bazar, Kaiser Health News, via Washington Post)
- What is it with government ministers and plagiarism? Now up: Russia. (Neil MacFarquhar, New York Times)
- “What is clear is that it will continue to be easy to publish weak science, but hard to publish well argued and supported accusations of fraud and misconduct.” Journals have a lack of incentives to expose fraud. (Richard Smith, The BMJ Blogs)
- Why do researchers continue to cite retracted studies in support of their work? Our co-founders try to answer. (Chemistry World)
- “[I]t is remarkable how academic institutions remain silent about the alleged scientific misconduct by some of their researchers.” Trudo Lemmens on two tales of transparency involving clinical trials. (Jotwell)
- Should reporters write about unpublished — e.g., non-peer-reviewed — research? asks Dan Engber. (Slate)
- “Five minutes with . . . Ivan Oransky.” Or as our co-founder put it himself: “4.5 minutes longer than most people want to spend.” (The BMJ) But if you want to hear Ivan talk for 30 minutes, here he is on The BMJ’s podcast.
- “But now that we are living longer we’ve come to realize that consuming this particular thing on a regular basis, even if it is really, really fun and convenient, is actually sort of bad. Like, cancer bad.” Poking a bit of fun at the flip-flopping of some study results. (Sarah Hutto, The New Yorker)
- How many mice are sacrificed for seriously flawed research? asks Julia Belluz. (Vox)
- “It did not occur to me that ‘impact’ would one day become so controversial.” It’s time to remodel journal metrics, says Nature.
- In STAT, our co-founder reflects on the unsung role of statisticians in science following the death of one he knew well.
- Elizabeth Iorns explains why science needs to prioritize reproducibility. (TEDMED)
- Despite 2009 reforms, “subtle gender bias may continue to operate in the…NIH review format in ways that could lead reviewers to implicitly hold male and female [grant] applicants to different standards of evaluation,” according to a new study. (Academic Medicine, sub req’d)
- A study used by the NFL to show that a program prevents concussions and other injuries doesn’t actually show that, Alan Schwarz reports. (New York Times)
- There’s another preprint server in town, from MDPI. Meanwhile, Roy Caine says preprint servers should introduce a staging process to “rectify the credibility problem.” (Nature)
- How much does publishing in open access (OA) journals cost in the US and Canada? “Full OA journal [article processing charges] average a little under 2,000 USD while hybrid articles average about 3,000 USD for publications by researchers at research intensive universities.” (PeerJ)
- Peer Review Evaluation wants your opinions on peer review.
- How serious is academic fraud in Malaysia? asks Ong Kian Ming, a member of that country’s parliament, following a recent case.
- Research fraud “requires a fundamental rethinking of the investigative response, and of the culture that has permitted such a phenomenon to flower,” Ian Freckelton tells Robyn Williams. (ABC)
- The University of Minnesota is asking people to speak up about research misconduct, which Carl Elliott finds a bit disingenuous.
- Even top researchers routinely misinterpret p-values, writes Andrew Gelman.
- “But as it turns out, irreproducibility in itself was not the problem—rather, it was its extent.” Ahmed Alkhateeb says we should not accept non-replicated scientific results. (Nautilus)
- “What Happens When Underperforming Big Ideas in Research Become Entrenched?” ask John Ioannidis and colleagues. (JAMA)
- It’s time for UK universities to stop gaming the Research Excellence Framework by hiring star professors for their citations, says a new report. (Elizabeth Gibney, Nature) More thoughts on the REF from Kat Smith and Ellen Stewart. (LSE Impact Blog)
- “How to manage the research-paper deluge?” (Esther Landhuis, Nature)
- The editor of a magazine has been fired for multiple cases of plagiarism. (Keith Kelly, New York Post)
- Time for India to set up a version of the Office of Research Integrity, says The Hindu.
- Frightening news about a prostate cancer study seems to be a false alarm, reports Denise Grady at The New York Times.
- “One of the hardest parts of doing social-science research is coming up with a question that matters.” Danah Boyd wonders if the field is risking irrelevancy. (The Chronicle of Higher Education)
- A “chronic pattern of scientific misconduct” threatens the U.S. Geological Survey’s reputation for high-quality science. (Randy Showstack, Earth & Space Science News)
- “Once you are found out you will [be] faced with academic misconduct and that will jeopardise your future in higher education.” The U.K.’s growing problem with academic essay websites. (Ben Price, BBC) And the U.S. National Collegiate Athletic Association (NCAA) takes a stand on academic misconduct by athletes and their professors.
- “[H]igh-risk/high-return research is found to be more attractive and financially rational than under the traditional peer review approach,” writes Jonathan Linton. “In other words projects with the highest disagreement amongst panel members should sometimes be selected even though the average panel score may not be the highest under consideration.” (Research Policy, sub req’d)
- How do you manage your data to make it more reusable? (Helena Cousijn, Ludivine Allagnat, Elsevier Connect)
- A study using an art competition as a model for peer review shows that peer review “leads to good research going unpublished.” (David Matthews, Times Higher Education)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our new daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
The first sentence has a mistake – the PhD student submitted a paper without *her* co-authors’ permission.
Fixed, thanks.
The Trudo Lemmens story about how “academic institutions remain silent about the alleged scientific misconduct by some of their researchers” really should have links to the two articles that are discussed there as well. They are actually very easy to read for the layperson and written like a good news story. Here’s the link to the one that I liked more, although both are good:
http://content.iospress.com/articles/international-journal-of-risk-and-safety-in-medicine/jrs717
To me, this is probably the most serious case of wrongdoing that I was not aware of in many a weekend read. I have no idea how Karen Wagner plans on going about not suffering serious professional consequences from this. Since patient safety was involved, some of the authors could face jail time in the future, including ghost authors and their managers. Never mind just losing an academic position. Although who knows, these are early days of all these misconduct thingies and maybe the penalties will only really start to bite 20 years from now.
Nothing ever seems to happen to successful medical academics for producing bad research outputs; they are usually worth too much to the institution. Maybe it will be different this time.
The article does mention something that I have seen elsewhere: there is hypomania, but also other events that are activating. For antidepressant trials this is so common that they should be performing an analysis for any of these events.
Interestingly, the host of the new preprint site has frequently appeared, and was earlier briefly listed, on Beall’s blog: https://scholarlyoa.com/?s=Mdpi
Looking at the details of the StemGenex trial, it seems that there are many ethical problems. They are using a treatment that has no proven efficacy. It is then simply observational, as they have no control group, and osteoarthritis is a disease where there is generally a reasonable effect in the control group, so they essentially won’t know anything useful as a result. And they charge people for it. So how it got through ethics is an interesting question, and all they mention about an IRB is “United States: Institutional Review Board.”
From the web page of StemGenex: “Stem cell therapy is not FDA approved and is not a cure for any medical condition.” It always helps to read the fine print, and it is the smallest font on the page, especially if you are going to spend $14,000.