The title of this post is the title of a new commentary in Administrative Science Quarterly by Gerald Davis of the University of Michigan. Its abstract:
The Web has greatly reduced the barriers to entry for new journals and other platforms for communicating scientific output, and the number of journals continues to multiply. This leaves readers and authors with the daunting cognitive challenge of navigating the literature and discerning contributions that are both relevant and significant. Meanwhile, measures of journal impact that might guide the use of the literature have become more visible and consequential, leading to “impact gamesmanship” that renders the measures increasingly suspect. The incentive system created by our journals is broken. In this essay, I argue that the core technology of journals is not their distribution but their review process. The organization of the review process reflects assumptions about what a contribution is and how it should be evaluated. Through their review processes, journals can certify contributions, convene scholarly communities, and curate works that are worth reading. Different review processes thereby create incentives for different kinds of work. It’s time for a broader dialogue about how we connect the aims of the social science enterprise to our system of journals.
You can read the whole piece here (it’s open access). As always, the floor is open.
Hat tip: Rolf Degen
Because people think that a journal is better than arXiv. But until basic plagiarism detection and image manipulation detection are incorporated into *all* journals, every journal suffers from guilt by association.
Title promises more than the paper delivers.
Until quite recently the community of active researchers in any scientific field was really very small. If you read every paper published by the top ten groups in a field, you were up to speed with the very best people. Those people published exclusively in a small group of journals, and every academic library carried those journals in hard copy.
Whether you could join this elite group was very problematic. As an outsider you might not even be considered for publication in these journals, and if you published in lower-tier journals, maybe no one would ever see your work. This was still the case at least until 1988, when I entered graduate school.
But now there is no limitation on access and exposure to work done anywhere. Database searches by keyword find all relevant documents regardless of publisher, and most active researchers can obtain even obscure papers fairly easily.
Does it make a difference? There was plenty of rubbish published in the old days by the ‘elites’. There are now so many more worthwhile articles published in every sub-sub-field that it has become impossible to master a field as in the old days. Many people think this indicates a loss of selectivity, but I think it is just due to an increase in the number of people working and the democratization of the process.
You still need to decide for any paper whether it is good or bad, on its own merits, if you intend to use the results in your own work. This, I believe, hasn’t changed.
Reblogged this on Schleudergang and commented:
If there were absolutely no structures behind how you publish things, how could you ensure a certain quality?
That is essentially the question. It is very easy to imagine a model where everything is published and the market then sorts everything out, but it is possible that no one will read a paper critically even though many will use its results. Under the traditional model, reviewers can also fire off any question they feel like and expect an answer. Most authors will make an effort to answer because they have publication as a prize; I’m not so certain that they would do so post-publication, where the common answer would be that it didn’t seem necessary. People complain about being asked for extra experiments, extra simulations, or extra data, but the result is likely to be a better paper.
One question is whether the problem is people doing a few months of work and then publishing a paper. If they were required to build something more complete, it might expose the problems in their methodology. Of course that would mean fewer papers, but what matters is only having a publication record comparable to the average; if everyone’s output is reduced, it will be fine.
Agreed with Ken: There are few incentives for providing rigorous post-publication reviews, whereas reviewers who serve on editorial boards typically put a lot of thought into pre-publication review. As a result, some fairly mischievous papers can end up being widely cited as “peer reviewed science.”
Recent evidence: in February 2014, a set of researchers published papers in two different journals using different subsets of the same data to draw rather different conclusions about the health correlates of vegetarianism. The article in Wiener klinische Wochenschrift has been shared 5 times; here are the last 3 sentences of the abstract:
“Subjects eating a carnivorous diet less rich in meat self-report poorer health, a higher number of chronic conditions, an enhanced vascular risk, as well as lower quality of life. In conclusion, our results have shown that consuming a diet rich in fruits and vegetables is associated with better health and health-related behavior. Therefore, public health programs are needed for reducing the health risks associated with a carnivorous diet.” (http://link.springer.com/article/10.1007%2Fs00508-013-0483-3)
The article in PLOS ONE has been one of the most-viewed articles since it appeared (currently at 166,903 views) and has been shared 19,541 times, frequently with tags such as “Scientists show vegetarianism causes cancer and mental illness!” The last two sentences of the abstract:
“Moreover, our results showed that a vegetarian diet is associated with poorer health (higher incidences of cancer, allergies, and mental health disorders), a higher need for health care, and poorer quality of life. Therefore, public health programs are needed in order to reduce the health risk due to nutritional factors.” (http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0088278)
[One post-publication commentator pointed out these seemingly incongruous results, and an author responded, sort of, but that has not stopped the PLOS ONE paper from becoming an Internet juggernaut.]
Whether you like it or not, you have answered the question posed by this story: the PLOS paper was viewed 166,000+ times. How else could a scientist achieve this on such a massive scale outside the framework of a journal? I am not against journals, or their existence, just against the lack of transparency and infrastructure to deal with claims, problems, and errors in the literature.
If we didn’t have journals, this site would probably have far fewer retractions to write about… CONSPIRACY!!!???!!!! (please note sarcasm)