This week at Retraction Watch featured revelations about legal threats to PubPeer, and a swift expression of concern for a paper denying the link between HIV and AIDS. Here’s what was happening elsewhere:
- Introducing the Proceedings of the Natural Institute of Science (PNIS), “the journal that publishes just about anything (real or fake).”
- Andrew Wakefield’s libel suit against Brian Deer and the BMJ remains dismissed as he loses an appeal. More from Deer here.
- Among top science fiction films, what movie do researchers cite most often? Why, Jurassic Park, of course.
- Is that journal fake? Here are 11 ways to tell. (There’s another rubric here.)
- A snail thought to have gone extinct because of global warming reappears. Should a paper be retracted?
- Music to our ears, if it means an incentive for transparency: Talking to reporters is linked to more citations.
- “Journalists have a responsibility to explain both the benefits and the costs of scientific and technological progress,” says Wade Roush, interim director of MIT’s Knight Science Journalism fellowship program.
- What do journal editors do, anyway?
- Nature Communications goes 100% open access. And Mary Ann Liebert is looking for a Director of Open Access.
- Meanwhile, are researchers experiencing article processing charge (APC) fatigue?
- The Patient-Centered Outcomes Research Institute (PCORI), a non-profit authorized by the U.S. Affordable Care Act, wants input on how its research is peer reviewed and released to the public.
- “[P]olitical shenanigans around the publication of reports from independent scientific advisory committees (SACs) have become all too familiar,” says Fiona Fox of the UK’s Science Media Centre, who wonders how to fix that.
- Here’s “how to critique claims of a ‘blood test for depression.’”
- “Health Researchers Will Get $10.1 Million to Counter Gender Bias in Studies.” (New York Times)
- A Sudanese researcher has fallen victim to a questionable publisher, Jeffrey Beall reports.
- A look into the inner workings of ClinicalTrials.gov: Paul Knoepfler interviews Deborah Zarin.
- An entrepreneur and writer says that “our botched understanding of ‘science’ ruins everything.”
- “Public communication from research institutes: is it science communication or public relations?”
- “[I]t seems clear that people of color are underrepresented in science writing,” writes Francie Diep.
- One reason why many findings can’t be replicated “may be that investigators fool themselves due to a poor understanding of statistical concepts,” argues Harvey Motulsky.
- Why not randomize one eye versus the other, instead of one person versus another? ask researchers, suggesting that the move could speed up trials.
I found some parts of Mr. Motulsky’s article somewhat hard to digest. The SEM vs. SD vs. 95% CI debate is a much more complex, and partly subfield- and journal-specific, issue.
Plotting the SEM is tempting not only because the smaller error bars improve the visual clarity of your graphs (though that is definitely a factor, to be honest), but also because it conveys a very important piece of information: overlapping SEM error bars always rule out a significant difference between two datasets. In sharp contrast, SD/95% CI error bars don’t convey this vital information, no matter how well they display the variability of the data points.
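(A minimal sketch, not from the original comment, of why the overlap rule holds for two independent groups of similar size: if the mean ± SEM bars overlap, the difference between the means is at most SEM1 + SEM2, so Welch’s t statistic is at most (SEM1 + SEM2)/√(SEM1² + SEM2²) ≤ √2 ≈ 1.41, below the ≈2.0 needed for p < 0.05. The group sizes, means, and SDs below are made-up illustration values.)

```python
# Sketch (an illustration, not the commenter's code): check numerically that
# whenever two groups' mean +/- SEM bars overlap, a Welch t-test cannot
# reach p < 0.05. All numbers are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.normal(10.0, 2.0, 30)                      # hypothetical group A
    b = rng.normal(10.0 + rng.uniform(0, 2), 2.0, 30)  # hypothetical group B
    sem_a, sem_b = stats.sem(a), stats.sem(b)
    if abs(a.mean() - b.mean()) <= sem_a + sem_b:      # SEM bars overlap
        t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
        # Overlap bounds |t| by sqrt(2), below the ~2.0 critical value,
        # so this assertion never fires.
        assert p > 0.05
print("Every overlapping-SEM case gave p > 0.05, as the comment predicts.")
```

Note that the rule runs in one direction only: non-overlapping SEM bars do not by themselves establish significance.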
You may find this feature of the SEM unimportant, but it once helped me find critical errors in an incorrectly prepared MS that I reviewed for quite a prestigious journal. In some of the figures, remarkable significant differences were indicated between study groups even though there was considerable overlap between the error bars representing the SEM, which is simply impossible. With error bars representing the SD or 95% CI, these problems would have gone undetected…
The bad but not uncommon habit of skipping the methodology section should not be a serious argument against graphing the SEM, as long as the journal guidelines permit it and the variability itself is not of real interest to the readers. However, it can cause quite a bit of confusion if the authors do not disclose which parameter the error bars represent… Such papers also say a lot about the quality of peer review at a journal, and indicate carelessness on the part of the monitoring editor.
In response to Beall’s comment about researchers experiencing APC fatigue, in which he states that “Most of the APCs I see eminate [sic] from greedy publishers wanting to exploit their customers (the authors) as much as they can get away with,” I ask the following: surely Elsevier, Springer, PLOS, Frontiers, and so many other journals and publishers, with APCs of several hundred to as high as US$2,500–3,000 for a single PDF file, should be considered the most exploitative of the publishing companies? Why, then, are these same publishers not also listed as “predatory” OA publishers on Beall’s blog, considering that, in the former two cases, they have strong and established fleets of OA journals? It would be useful for the scientific community to know what differentiates, say (a random choice), Academic Journals from Elsevier under Beall’s “criteria.” Beall has not approved my comments on his blog this week.
I guess the term “predatory” applies only to “pay & print” journals. The OA journals from these publishers must implement some filter to keep their impact factors high, so they can ask the authors for more money. It’s a feedback loop: no one would pay $3,000 for a paper in a journal with an impact factor of 0.001, so they must keep it high enough that people will pay. Predatory publishers, by contrast, don’t even have an impact factor; they don’t care what they publish and will print it as long as you pay.
Proceedings of the Natural Institute of Science (PNIS). You can’t help but wonder how much they had to struggle not to name it the Proceedings of the Electronic Natural Institute of Science…
“There are three types of PNIS articles: SOFD, HARD, and Editorial.”
From the ‘botched understanding of science’ article, I liked this quote:
“The problem with that is that it’s absolutely not true. Aristotelian “science” was a major setback for all of human civilization. For Aristotle, science started with empirical investigation and then used theoretical speculation to decide what things are caused by.”
It’s funny, because it seems like most of the chemistry articles that I read are written exactly like that: ‘We found some stuff, or saw this catalysis with this molecule; now we’re going to speculate on the mechanism and why it works (otherwise it won’t get into the best journal).’ Maybe it’s just my field of chemistry…
Then again, there are many things about that article that are just plain weird, but I don’t really want to get into them. There is also this quote at the beginning: “Science is the process through which we derive reliable predictive rules through controlled experimentation.”
That’s not really how I do most of my ‘science’. Most of the time I just make weird stuff that I think is cool, and maybe it does a really interesting reaction.
Talk about weird: how about these quotes, below, from the same article? The only thing worth reading in the whole essay is the conclusion that economics is not a science. But then again, everybody knows that already.
“This is how you get the phenomenon of philistines like Richard Dawkins and Jerry Coyne thinking science has made God irrelevant, even though, by definition, religion concerns the ultimate causes of things and, again, by definition, science cannot tell you about them.”
and
“You might think of science advocate, cultural illiterate, mendacious anti-Catholic propagandist, and possible serial fabulist Neil DeGrasse Tyson and anti-vaccine looney-toon Jenny McCarthy as polar opposites on a pro-science/anti-science spectrum, but in reality they are the two sides of the same coin. Both of them think science is like magic, except one of them is part of the religion and the other isn’t.”
And one more thing: his definition, “Science is the process through which we derive reliable predictive rules through controlled experimentation,” would completely discount most of Einstein’s work – at least the parts that were based on ‘thought experiments’ – which is quite a bit. And much of present-day theoretical physics, too (’branes, extra dimensions, etc.). (Just sayin’.)
You say that as though it’s a bad thing.
On purely philosophical grounds, leading with the Swedenborgian “here’s one certain sign that something is very wrong with our collective mind” is a pretty big red flag.
I tried to read the “botched understanding” article but any point the author may have had was drowned out by the overpowering drone of smugness and scorn for all the little-brains.
Science Communication or Public Relations
“The overall opinion of the authors is that science communication activities are almost always a form of PR. The press release is still the most popular science communication and PR tool.”
There is probably a correlation between column-inches of publicity and funding generated.
An example seems to have shown up again just last week, when the massively overhyped March discovery of “gravity waves” turned out to be just cosmic dust.