Weekend reads: “Too much success” in psychology, why hoaxes aren’t the real problem in science

Another busy week at Retraction Watch. Here's what was happening elsewhere around the web in science publishing and research integrity news:

4 thoughts on “Weekend reads: “Too much success” in psychology, why hoaxes aren’t the real problem in science”

  1. In response to the story about cover letters: I disagree with the position. The cover letter is a total waste of time and energy. Basically, one has to explain, in a pompous, I-feel-good-about-myself way for the editor-in-chief, what is already scientifically described in the abstract. One has to give guarantees of originality when, in many cases, these are already requested in the online submission. And one has to declare the lack of conflicts of interest, which one already does in a separate section of each paper. In some cases, particularly with Elsevier journals, one wastes time in the cover letter explaining what is original, when one also has to submit "Highlights", a useless marketing tool used to flood the internet with keywords and catch-phrases. The cover letter is a useless concept and should be scrapped. All guarantees must form part of the manuscript itself and should be published with it, i.e., guarantees of originality, absence of COIs, etc. Make the cover letter extinct. It is an irritant and wastes our time.

    Science is not driven by common sense any longer. It is driven exclusively by the marketing and legal departments of publishers, who push academics and peers to the front line of the quality-control battle, aka peer review, in a bid to ensure quality, for FREE, while they make record profits, keep a keen eye on what politicians and corporations most want to extract from science, and throw a few bones to "real science" to feign true interest in it. Why else would Reed-Elsevier pump so much into Republican and Democratic campaigns? Scientists are being exploited now as they have never been exploited before in the history of science. Predatory is no longer limited to the open access journals that Beall describes on his blog. Predation is now industry-wide. The winning predators are those that give the smoothest, sweet-talking, evasive, marketing-layered responses to questions, those with the flashiest websites and databases that can boost the ego of scientists, giving them a "feel-good" feeling about their PDF files.

    And, in many cases, scientists have their own weakness, naivety and stupidity to blame for these problems. They continue to claim that peer review is the best of bad options, and they continue to pay open access fees that are nonsensically extreme (I estimate that a PDF would cost, with all issues covered, about US$20–25 to produce, maybe US$100 at most for a top-tier publisher). The hundreds or thousands you are paying go to the extravagant salaries of management and legal departments. So, research misconduct is one of the challenges of the 21st century. The other is exposing the fraud by publishers and showing that we are in fact dealing with a lot more predation than just the Beall list.

    The only way to start something new is by exposing the rot in everything that exists, sending in a demolition team to rip it up, and creating something totally new and refreshing, honest, non-market-driven, and that holds all players accountable (authors, editors, publishers). At the moment, the retraction drive is to hold ONLY authors accountable, as if the system operates only with authors. The editors and publishers are scapegoating the authors, and we need to expose them, too.

    1. Barry, thanks for bringing this to our attention. That's a fascinating story, and the comments section is extremely telling. What I found particularly interesting was that Frontiers had wiped out at least a dozen user comments. I would be very curious to know what those comments might have said that infuriated Frontiers so much… although I did see a few other comments that contained the word "fraud", so the deleted ones must have been really bad. Have you thought about contacting the institutes, and the directors of the institutes, where these authors work?

  2. I guess the Knight Science Journalism people don't like semi-critical comments, so I'll post this here (in re: the science vs. journalism conversation):

    I think this article makes some very good points, but they are undercut by some serious flaws in logic. The most glaring is that you take Hossenfelder's analogy to the assumptions sports writers can make about the knowledgeability of their audiences and leap straight to the inference that "she'd rather read technical articles full of equations". There is a vast gap between those two positions, enough that conflating them (regardless of the "seems to be" weasel) looks like a straw man, or at least patently unfair. To take the reverse of your statement, you are assuming that science reporting would have to be "full of equations" to be more accurate or detailed than it currently is, which is patently untrue, even if we take "full of equations" as code for "full of jargon" or even "assuming at least a four-year-degree knowledge of math/statistics/chemistry/whatever". On the other hand, the analysis you quote from Chris Joyce is excellent: sports writing is not a monolith (in terms of complexity), and neither need science reporting be.

    Likewise, the “Science” anecdote is pretty weak. First, it conflates what sort of science journalism scientists like to read with what sort of science journalism scientists think should be aimed at the general public (which is more what Hossenfelder was talking about). Second, it assumes that even if those two are the same, and scientists are happy with the quality of journalism in “Science”, that is an indicator either of how happy scientists are with science journalism in general, or of some objective measure of the quality of science journalism in general; I don't think either of those is fair. Third, it assumes that because a lot of scientists read “Science”, and “Science” has journalistic articles right there at the front, the scientists must be happy with the quality of that journalism. “[S]cientists seem to like it that way” doesn't sound like data to me.

    I think this really comes down to two sides talking past each other, anyway. Hossenfelder described what bad science journalists do, probably overgeneralized to science journalists as a whole, and neglected to consider how bad some scientists are at communicating. In response, this article mostly describes what good science journalists do, and some of the problems they face, while also describing how some scientists are really bad at communicating.
