This morning, our thoughts are with the people of Paris. The week at Retraction Watch featured the retraction of a paper claiming dramatically higher rates of sexual trauma among men in the military, and a look at whether gender plays a role in peer review. Also: We’re hiring. Here’s what was happening elsewhere:
- Journals are too slow, too expensive, too limited, too unreliable, and too parochial, says Harlan Krumholz in Circulation: Cardiovascular Quality and Outcomes.
- Hmm: Thomson Reuters is mulling a sale of Thomson Scientific, which produces the Impact Factor, and suitors could include Elsevier, Wiley, and Springer Nature.
- China’s National Natural Science Foundation has revoked funding from authors found to have had papers accepted because of fake peer reviews, state news agency Xinhua reports. (via Shanghai Daily)
- More brilliance from The Onion: “‘Seek Funding’ Step Added To Scientific Method.”
- COPE has two new flowcharts on how editors should respond to whistleblowers, whether the concerns are raised directly or on social media. Neuroskeptic weighs in on the latter.
- Six researchers are calling on The Lancet to seek an independent re-analysis of the PACE trial of chronic fatigue syndrome.
- “Rampant software errors may undermine scientific results,” writes David Soergel in F1000Research.
- A PLOS ONE correction earns…a correction.
- More than 2,200 authors, and none of them noticed the error in this European Physical Journal C paper?
- 10 ways to get your scientific work the attention it deserves, from Danielle Padula and Catherine Williams.
- Why sex makes a difference: When a lab couldn’t replicate results from published reports, they eventually realized that “the main difference between procedures in literature and in her lab was that they had been using female rats!” Aparna Shah reports at the PLOS Blog.
- “The quality of scholarly editing is ‘extremely uneven,’” Brian Bloch writes in Times Higher Education.
- For every conservative social psychologist in academia, there are about 14 liberal social psychologists, notes Arthur Brooks in The New York Times.
- A bee expert claims that the USDA is punishing him for publishing evidence that pesticides could harm pollinators. (Mint Press News)
- Parke Wilde wonders why so many papers about nanotechnology include the same exact language.
- “Journals should publish referee reports and respond to well-founded concerns about papers after publication,” says Nikolai Slavov in eLife.
- The ORI sanctions against former Duke cancer researcher Anil Potti were too light, say Keith Baggerly and C.K. Gunsalus in The Cancer Letter.
- Peer reviews “of new grant applications that are ranked within the top third of applications submitted, are at best imprecise predictors of bibliometric productivity,” note Michael Lauer and Richard Nakamura, both of the NIH, in the NEJM. “Do these findings mean that peer review, as it is currently practiced, is failing?”
- Here are some honest conflict of interest disclosure statements that Arya Sharma would like to see.
- A researcher at Queen’s University in Kingston, Ontario is being accused of harassment, but the Canadian Association of University Teachers says that’s in retaliation for blowing the whistle on colleagues’ misconduct. (Globe and Mail)
- How and why we cover retractions: Our Alison McCook talks to David Shifrin on Science Writing Radio.
- Can evidence clean medicine’s house? A review by MedPage Today’s Sarah Wickline Wallan of Vinay Prasad and Adam Cifu’s new book.
- “[O]bservers placing bets in a stock exchange–like environment are pretty good at predicting the replicability of psychology studies,” reports Bob Grant at The Scientist, based on a new study.
- “A new paper from British psychologists David Shanks and colleagues will add to the growing sense of a ‘reproducibility crisis’ in the field of psychology,” says Neuroskeptic.
- “Given the American faith in medical advances (the NIH is largely exempt from the current disillusionment with government), it is easy to forget that clinical trials can be risky business,” writes former NEJM editor-in-chief Marcia Angell in The New York Review of Books.
- “The editing process in journalism, I think, sometimes offers better protection for the quality of the ideas and writing than our peer review process,” says Atul Gawande in an interview in STAT.
- There are “new types of fraud in the academic world by cyber criminals,” says Mehdi Dadkhah in the Journal of Advanced Nursing.
- Publication bias can threaten “medical providers’ ability to practice evidence-based medicine in its truest form, and this in turn puts patients at unnecessary risk,” writes Martin Mayer in F1000Research.
- The scientific community must “change the way we conduct, report and publish our research,” says Arjen van Witteloostuijn in a petition.
- See what members of a World Science Forum panel had to say about research integrity. (The Scientist)
- “African academics are being caught in the predatory journal trap,” says Adele Thomas in The Conversation.
- And there was rejoicing in the land: Canadian government scientists have been unmuzzled by the Trudeau administration. (The Scientist)
- “Research published earlier this year claiming chimpanzees can learn each other’s language is not supported,” according to a press release about a new paper. Both the original study and the critique are published in Current Biology.
- Concordia University doesn’t plan to retract a controversial report about the asbestos industry, despite acknowledging an oversight. (Montreal Gazette)
- “Science doesn’t work the way you think it does,” says Tom Levenson in The Atlantic.
- Jeffrey Beall doesn’t mince words about a “completely inept” scholarly publisher.
- “Open Data: Can It Prevent Research Fraud, Promote Reproducibility, and Enable Big Data Analytics In Clinical Research?” asks Patrick Myers in The Annals of Thoracic Surgery. (sub req’d, we note with some irony)
- “It’s patients who lose out if doctors and professional journals stop asking the right questions,” says Aseem Malhotra in The Guardian.
- “We published the method in Nature Methods, but unfortunately, that journal has no space for the methods themselves,” writes Lenny Teytelman, announcing support for computational methods on protocols.io.
- White House advisers are calling for greater accountability at the nation’s biolabs after a series of high-profile missteps, Alison Young reports at USA Today.
- “A literature database with smarts?” Kerry Grens reports for The Scientist.
Retractions outside of science:
- A fortune cookie retraction.
- A retraction by Microsoft.
- Ben Carson’s campaign “ended up incorrectly accusing [Politico] of retracting its report on the Republican presidential frontrunner’s murky history with West Point.”
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post. Click here to review our Comments Policy.
Some may be interested:
Teixeira da Silva, J.A., Dobránszki, J. (2015) Potential dangers with open access files in the expanding open data movement. Publishing Research Quarterly 31(4): 298-305.
http://link.springer.com/article/10.1007/s12109-015-9420-9
DOI: 10.1007/s12109-015-9420-9
You write, of the possibility of data theft, “Although senior players in the publishing world might argue that the risk is not important, perhaps it would be worthwhile for publishers to counter the risk, before such an abuse can take place, by adding security to such files such that cut and copy options are not permissible, or so that view-only options are possible.” Here you are failing to take the advice given earlier in your article, to think like a thief. ANY DATA THAT CAN BE VIEWED CAN BE COPIED. The process can be made labor intensive, but it cannot be made impossible (or even truly difficult, merely tedious).
Lee, thanks for that comment. We are simply alerting the public. But your words are absolutely right: “ANY DATA THAT CAN BE VIEWED CAN BE COPIED.” To date, iThenticate, for example, can identify copied text (aka plagiarism and self-plagiarism), but what software can detect copied figures, copied tables, or copied data from the raw data files that increasingly accompany published papers as supplementary files? It’s like a disaster waiting to happen. The phenomenon may already be taking place, but how would an editor ever know the source of the data? One need only use a small amount of imagination to begin to appreciate how serious this situation MIGHT get. And it is precisely for this reason that we need more vigilant scientists conducting post-publication peer review, not fewer, independent of the risks. And that is why journals that just plaster files on their websites without absolutely any protection (encryption) need to start taking a step towards protecting that data.
Queen’s University is not in Toronto, but in Kingston, which is about 265km to the East of Toronto. You may have been fooled by the fact that the newspaper you link to is located in Toronto.
Fixed, thanks.