Researchers’ productivity hasn’t increased in a century, study suggests

Are individual scientists now more productive early in their careers than 100 years ago? No, according to a large analysis of publication records released by PLOS ONE today.

Despite concerns about rising “salami slicing” of research papers under the “publish or perish” culture of academic publishing, the study found that individual early career researchers’ productivity has not increased in the last century. The authors analyzed more than 760,000 papers across all disciplines, published by 41,427 authors between 1900 and 2013 and cataloged in Thomson Reuters’ Web of Science.

The authors summarize their conclusions in “Researchers’ individual publication rate has not increased in a century”:

…the widespread belief that pressures to publish are causing the scientific literature to be flooded with salami-sliced, trivial, incomplete, duplicated, plagiarized and false results is likely to be incorrect or at least exaggerated.

To clarify: the authors didn’t measure individual research productivity simply by the number of papers an early career scientist publishes. Not surprisingly, when looking at papers from scientists in the first fifteen years of their careers, the authors saw that the total number of papers either remained stable or increased across all disciplines throughout the 20th century. But as the number of papers increased, so did the number of co-authors: from none at the start of the century to an average of between two and seven by 2013 in every field except the Arts and Humanities.

When taking co-authorship of papers into account — which the authors call “fractional productivity” (each paper counts as 1/n toward each of its n co-authors’ output) — the number of papers each researcher has published has not increased; in fact, it has mostly declined over the last century.

Even when the researchers ignored co-authorship and counted only papers on which researchers are first authors — which the authors refer to as “first-author productivity” — the study found no change in productivity throughout the century.
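
To make the two measures concrete, here is a minimal sketch, in Python, of how fractional and first-author counts can be computed from paper bylines. This is our illustration, not the authors’ code, and the author keys and byline data are hypothetical:

```python
from collections import defaultdict

def count_productivity(papers):
    """papers: iterable of author lists, one list of name keys per paper."""
    fractional = defaultdict(float)   # each paper adds 1/n to each of its n co-authors
    first_author = defaultdict(int)   # each paper adds 1 to its first author only
    for authors in papers:
        n = len(authors)
        for name in authors:
            fractional[name] += 1.0 / n
        first_author[authors[0]] += 1
    return fractional, first_author

# Hypothetical bylines, keyed the way the study keys authors
papers = [
    ["Vleminckx-SGE", "Smith-JA", "Rossi-PM"],  # three co-authors: 1/3 credit each
    ["Vleminckx-SGE"],                          # solo paper: full credit
]
frac, first = count_productivity(papers)
print(round(frac["Vleminckx-SGE"], 2))  # 1.33 (1/3 + 1)
print(first["Vleminckx-SGE"])           # 2 (first author on both papers)
```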

So even if an individual scientist’s CV looks longer today than it would have 100 years ago, the amount of published material they’re producing, so to speak, may actually be the same or even smaller.

Not everyone is convinced by the findings, however. Ferric Fang from the University of Washington in Seattle — who is also on the board of directors of our parent organization — is one of the skeptics: 

Count me among those who continue to believe that “pressures to publish are causing the scientific literature to be flooded with salami-sliced, trivial, incomplete, duplicated, plagiarized and false results.” This new study has done little to persuade me otherwise.

Of course, there are other ways to measure a researcher’s contributions to a paper besides co-authorship. It’s also possible that articles have increased in length across the sample time period, so each co-author may now be responsible for more material than previously. 

Daniele Fanelli of Stanford University in California, first author of the paper (and of many others we’ve covered on the topic), told us:

We also checked for the average length of the paper, and found suggestions that this is in fact getting longer. These data seemed too preliminary to be included in the paper, but independent studies would suggest the same trend. This is important because it further contradicts the salami-slicing hypothesis.

The study’s sample included many different article types — from editorials to correspondence — which often differ considerably in length from research papers. This variety may actually strengthen the paper’s conclusions, Fanelli explained:

We included all [article] types, which again makes the results conservative with respect to the hypothesis, because items such as letters or comments are a recent addition to the Web of Science. There is no evidence to suggest that authors were publishing more correspondence in the past than today, and in any case such records are more likely to be missing from past records than recent ones.

The authors explain the rise in co-authorship in the paper, writing:

Co-authorship might also have increased thanks to improvements in long distance communication technology, as well as a growing support for interdisciplinary research. However, the extremely rapid rise in co-authorship observed in biomedical research and other areas suggests that other factors in addition to the growing complexity of science are at play.

When the authors compared papers from different countries, they found that nations which typically exert higher pressure to publish — such as the United States and the United Kingdom — had higher fractional and first-author productivity. But there was no evidence that authors from these countries are cutting corners to publish more papers, the authors note.

One limitation of the study is that the authors searched for researchers by surname plus the first letters of their first name and at least two middle names — such as, in the example they provide in the study, “Vleminckx-SGE.” This may skew the results, since women may change their last names and many scientists don’t have two middle names. Fanelli added:

We focused on those with three initials in their names (first name, and two middle names). This tended to over-represent certain countries, and perhaps has over-represented certain demographics — a referee for example suggested that we are over-representing scientists of catholic background. This however is unlikely to have affected our central results, which concern trends over time that appear to be global.
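
For illustration, here is a minimal sketch, again ours rather than the study’s actual code, of how such a surname-plus-initials key can be built and filtered for the three-initials requirement Fanelli describes. The full names below are invented:

```python
def author_key(full_name):
    """Build a "Surname-ABC"-style key from a full name."""
    parts = full_name.split()
    surname, given_names = parts[-1], parts[:-1]
    initials = "".join(p[0].upper() for p in given_names)
    return f"{surname}-{initials}"

# Invented example names; only those with >= 3 initials are sampled
names = ["Stefan G. E. Vleminckx", "Jane Smith"]
keys = [author_key(n) for n in names]
sampled = [k for k in keys if len(k.split("-")[-1]) >= 3]
print(keys)     # ['Vleminckx-SGE', 'Smith-J']
print(sampled)  # ['Vleminckx-SGE']: only three-initial names are kept
```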

Even if individual scientists are not producing as much publishable data as before, they still face growing pressures, the study noted, such as writing grant applications, reports, syllabi and other material — all of which may force them to spend less time doing research. Another possible explanation is that the average time and effort that goes into preparing research papers has increased over time. 

Cassidy Sugimoto, an information scientist at Indiana University Bloomington, questioned whether the absence of an increase in productivity has any bearing on the unfortunate consequences of pressure to publish, such as salami slicing and other questionable research practices:

The main limitation of this piece is implying a link between observed rates of productivity and rates of malpractice…there is no more than a speculative link between [the two].

Fang, in turn, offered some additional evidence that contradicts the study’s findings:

There is simply no denying that most contemporary researchers feel an enormous pressure to publish. For example, a survey of Belgian scientists by Joeri Tijdink and colleagues found that 72% experienced excessive publication pressure, and this pressure was associated with an increase in questionable research practices.

Fang added another important caveat:

The authors have analyzed the average number of papers published by early-stage scientists, but this lumps together both successful and unsuccessful scientists. Since most early-stage scientists do not ultimately pursue an academic career, it would have been preferable to analyze trends in the publication productivity of the most highly productive individual scientists over time.

Moreover, the most “serious flaw” of the study, said Fang, is that it does not account for the fact that last authors of papers, who are the most senior in some disciplines, face the most pressure to publish.

Fanelli, on the other hand, said:

We have no indication that researchers publish more as last authors…But we are obviously still looking at early-career researcher[s]. Properly testing this hypothesis requires a different approach from the one used in this study. It is possible that lab leaders are multiplying their papers by having many junior colleagues, just like, in general, it seems plausible that authors might increase their productivity by salami-slicing their collaborations, so to speak.

18 thoughts on “Researchers’ productivity hasn’t increased in a century, study suggests”

    1. it was a commonly-perpetuated belief around the late 90s that the adoption and commoditisation of the computer would create more productivity.

      you’re right in questioning the logic, but at the time it sounded like a great idea to universities, publishers, etc, and so… on they went.

      clearly we are not more productive, even with the aid of a computer, although there may be the odd exception.

  1. I’m surprised they barely touched the topic of norms (assuming I didn’t miss the sentences dealing with it). As with any individual’s perception of a phenomenon, scientists’ perception of “salami slicing” doesn’t come from an objective measurement (and this study seems to prove it) but from personal experience that goes through a complex process of comparison with what other people do, what they feel should be done, what they believe can be done, etc. The “increasing workload and research effort” hypotheses are quite interesting, but those possible phenomena could interact with a third one: an increase in the strictness of norms about which research results are good and worth publishing. This would also help to explain why researchers feel “salami slicing” is endemic while being compatible with no objective increase in scientists’ output.

  2. This measure of productivity is flawed. What is productivity in normal economic parlance? It is the ‘value’ (in some sense) of the stuff produced in a given time relative to the input. Clearly, looking at the number of papers alone is insufficient, because it does not take into account the difficulty of the work that went into them.

    I mean, the whole point of science, and academia more widely, is to make progress on various topics, such that the matter under investigation should get harder over time. A theoretical physicist today working on quantum gravity is surely grappling with tougher concepts (and therefore being more productive) than a physicist in 1900 investigating X-rays.

    To use an analogy, this paper is like someone producing a finding that television manufacturers haven’t increased in productivity because (say) the number of televisions produced has stayed constant over time. This ignores the vast difference in television standards – from black-and-white analog sets to high-definition flatscreens, productivity has risen on this basis.

    1. But if everybody and their mother was of the view that every television manufacturer is producing more and more televisions every year, it would be of interest to discover that in fact per-manufacturer rates of television production have stayed constant over time.

      1. Of course, but then it would be wrong to say that television manufacturer productivity hasn’t increased. And as far as I’m aware, people say “there’s huge pressure on scientists to publish”, not that the absolute number of papers has been going up. It is highly possible that the number of papers has stayed constant, but the work you have to do in between papers has increased, meaning just to publish the same absolute amount of papers the workload will have gone up. After all, even to publish false, salami-sliced data must take some effort!

        1. Alex, you are perfectly right, and that is exactly what the paper discusses. I think part of the confusion here is generated by the word “productivity”. The paper deliberately talks about “publication rate”, because the concept of productivity is indeed rather subtle.

  3. A major shortcoming of this study, in my view, is that the method was inappropriate for the question. The authors purportedly wished to determine whether researchers are publishing more papers in the modern era as a result of pressure to ‘publish or perish’. However, they counted only first-author papers, apparently unaware that first authors are typically trainees, whereas PIs are typically last authors. Furthermore, they did not count co-authored papers, which reflect the increasingly collaborative nature of science. (The mean number of authors per paper in the life sciences has increased five-fold over the past century.) As a result, the authors essentially showed that the mean publication productivity of research trainees has not substantially increased over time, which is not all that surprising or interesting. Their conclusion that ‘scientists are not publishing papers at higher individual rates’ is not supported by the data.

    Another problem is that mean productivity is not an appropriate measure for a non-Gaussian distribution, and research productivity is not normally distributed. Ioannidis and colleagues found that fewer than 1% of contemporary scientists are able to publish continuously over extended periods of time, yet this small group accounts for the majority of researchers with high citation impact and nearly half of all papers overall (PLoS One, 2014). An examination of the publication output of the most highly productive scientists over time might have been more informative.

  4. The authors start with the “widespread belief that pressures to publish…” drive increased publication rates. Another way to assess this is to survey publication expectations or standards for tenure and promotion, since those are among the major pressures driving publication. I have no hard data on this, but a number of decades as a university faculty member have left me very familiar with the rising expectations in every discipline I know, and those increasing standards are, in fact, official policy at my university, and have been for at least 20 years.

  5. Thank you all for your comments. It seems that the point of the paper has gotten a little lost. Jeremy Fox got it entirely, and his own blog post is a good read, too: https://dynamicecology.wordpress.com/2016/03/10/scientists-these-days-dont-publish-any-more-than-they-did-in-1900/

    The one and only hypothesis tested in the paper is that the literature is flooded with salami-sliced and trivial results, which are produced in greater number as a cheeky expedient for scientists to publish more. If this idea is true, then clearly we should expect the number of papers per individual to increase. A rough per-capita count would be too imprecise, so we actually counted the papers of continuously-publishing individuals. It took a lot of work!

    Ferric Fang doesn’t like the single-authored counting. But it is not the only measure we tried. We counted all co-authored papers, then counted fractionally, and also counted last-authored papers and first-authored papers with different cut-offs (removing those with zero, one, or two), and we repeated all analyses with different time windows (8 years, 25 years). In no case do we see a real increase. Frankly, even the gross number of papers didn’t grow as impressively as one could have expected.

    Fang’s suggestion to only look at the highly productive seems to me a circular argument. But note that the data is attached to the paper, so everyone is welcome to try their own analyses. As long as they communicate all negative results, and they cite us, of course ☺

    Cassidy Sugimoto thinks this doesn’t prove misconduct isn’t a problem. And I agree. As I tried to explain to the journalist (and as is discussed a bit in the paper), this does not mean that scientists are whining about pressures for no reason, or that we are working less hard than before.

    But it does suggest that we should rule out over-publication of trivial results as a problem, and focus our energies (and policies) on more substantive issues.

  6. Dr. Fanelli,

    I’m afraid I find your arguments to be unpersuasive. There are many limitations to your analysis, including restriction to the small subset of authors with three initials, which over-represents authors from the UK and Portugal by approximately ten-fold, and the crude use of publication number as a measure of productivity. Even so, your own data show an increase in the number of papers per author over time. However, you then adjust the number of papers by counting co-authored papers only fractionally or not at all. After this adjustment, you conclude that scientist productivity has not increased. Alternatively, one could conclude from the same data that productivity per researcher has increased, along with collaborative research. Your interpretation is based on a value judgment. My experience tells me that you are being unfairly dismissive of the contributions of co-authors, particularly senior (last) authors.

    The suggestion to look at the most highly productive scientists is not circular at all. Researcher productivity is not normally distributed, so the comparison of means will fail to capture important trends. John Ioannidis and colleagues have shown that the top <1% of researchers who are able to publish in a continuous, uninterrupted fashion are responsible for more than 40% of all papers and most of the highly-cited papers (PLoS One, 9:e101698, 2014). If one examines the number of papers per author in this important group, productivity measured as papers per author has risen more than 25% over the period 1996–2011.

    I stand by my earlier criticisms. Nevertheless, I thank you for performing this provocative study, even though I differ with regard to the interpretation of your data.

    1. But if the conclusion is completely different based on your interpretation, then how are this study’s conclusions valid?

    2. Dr Fang, my study is not provocative. It was a genuine test of a clearly defined hypothesis. Mutual criticism is the lifeblood of research, but the challenge for all men and women of science is to examine data with cold logic, and to revise one’s opinions if necessary.

      I, for one, published evidence years ago which I thought supported a hypothesis similar to this one. A few studies down the line, I am revising my beliefs. And, with all due respect, I would invite you to keep an open mind, too.

      Invoking limitations is not enough – all studies have them. As you know, the three-initial limitation is amply discussed and cross-examined in the paper. The trends are observed across countries, which shows that the sampling could not have affected the central claim of the paper. If you think it did, then you need to propose a testable explanation – i.e., how could the three-initial sampling show no increase in productivity across the globe and across all disciplines, an increase which a different sampling would reveal?

      As for the 1% claim, I believe the figure might have been over-estimated for various reasons, but that’s beside the point. Even accepting it, I am not sure how it contradicts the claim. Are you suggesting that this 1% is working alone? Are they not, on the contrary, benefiting from the help of lots and lots of collaborators? In which case, what is the publication rate of the latter? Are we pretending they do not exist?

      Indeed, how can we ignore the effect of collaborations altogether? If ten people work on a paper, do we count it as ten papers or one? I could agree that it is something in between – but the point here is that such a paper would likely be denser in information and data, not less so, as some people simplistically suggest.

      Moreover, by definition this 1% would not represent the average scientist, and presumably is not salami-slicing papers. So I really struggle to understand what your point about the 1% is. The first version of the manuscript had two or three paragraphs discussing how there is a subgroup of impossibly productive collaborators (in our sample alone, you have people publishing hundreds of papers in 15 years), who are likely senior lab members and/or prominent figures who get added to lots of papers. So, the people you are thinking about were part of the sample, too. They are just not that common. And their behaviour is not salami-slicing as normally understood.

  7. Dr. Fanelli,

    You suggest that I do not have an open mind because I do not accept the simplistic hypothesis that you are proposing. Let’s try not to make this argument personal. Your hypothesis is that mean scientific productivity, measured by first-author or ‘fractional’ co-authorship by early-stage investigators, has not increased in a century, therefore ‘salami-slicing’ does not occur. I am submitting an alternative hypothesis: that one cannot ascribe a single behavior, e.g., ‘salami-slicing’, to all scientists, and therefore your calculation of adjusted mean productivity of a limited subset of scientists neither proves nor disproves your hypothesis.

    There is enormous heterogeneity in the publication behavior of individual scientists, and this reflects diverse motivations and responses to these motivations. I suggest that the increased productivity of the most highly productive scientists (the ‘1%’) reflects, in part, a response to a system that places a value on the raw number of publications that one generates. Some of this may be inflated by large collaborative groups, honorary authorships and the like, but some also reflects a true increase in scientific productivity (here narrowly measured as publication number). I agree that this is not salami-slicing. However, elsewhere in the vast scientific enterprise, at the other end of the researcher spectrum, there are individuals publishing repetitive papers, sometimes containing data that overlap with other publications from the same authors. This is a form of ‘salami-slicing’ and represents another type of response to pressure to publish. It is not helpful to pretend that this pathological publication behavior and the incentives for it do not exist.

    Your study is provocative because it provokes reflection and debate. The data are what they are, but they are open to multiple interpretations. My interpretation is based on my independent assessment as well as other observations in the literature and thirty years’ experience as a working scientist. Let us agree to disagree.

  8. Dr. Fang,

    I think the only point on which we really disagree is the use of the term hypothesis.

    “that mean scientific productivity, measured by first-author or ‘fractional’ co-authorship by early-stage investigators, has not increased in a century, therefore ‘salami-slicing’ does not occur.”

    I would not call this my “hypothesis” but my findings and conclusions (although I would not talk about productivity, as discussed above, but purely about “publication rate”).
    The hypothesis, amply reported in the literature discussing the ills of contemporary science, is that the rate of publication HAS increased, BECAUSE of salami slicing.

    Your argument that there is much diversity and heterogeneity is perfectly fine, and it does not contradict my conclusions. Quite the contrary, in fact (it is exactly the point I made in my previous reply). So, we can agree that we agree.

    It seems to me, however, that many academics who discuss these issues in the literature make EXACTLY that kind of generalization (if not, why would policies have been made against having too many papers, as adopted in the Netherlands and Germany? Though, interestingly, the latter is taking a step back).

    Let’s face it. If this study had found – as it perfectly well could have – that publication rate is growing even after controlling for co-authorship, academics all over would have uncritically hailed this finding as evidence of the nefarious consequences of pressures to publish and how pervasive they are.
    I doubt they would have opened a PubPeer page against it, as someone immediately did here.

    I respect your experience and intuitions. But without hard data to prove the contrary, I don’t think we should insist that salami slicing is a common problem in science.

  9. Of course, it is possible that salami slicing and other forms of duplication are, in fact, a ‘common problem’, but perhaps only in certain, more ‘visible’ disciplines whose findings have a more critical impact on public health. For example, consider the study by von Elm et al. (2004), who reported that of 1,234 articles reviewed in the area of anesthesia and analgesia, 5% were found to be duplicates that gave no indication as to the original publication. There is also an earlier study by Schein (2001), who found that 14% of 660 articles in surgery represented instances of redundant publication. Surely, such findings show that duplication is (was?) a major problem in those disciplines.

    References

    Schein, M. (2001). Redundant publications: from self-plagiarism to “salami-slicing”. New Surgery, 1, 139–140.

    von Elm, E., Poglia, G., Walder, B., & Tramèr, M. R. (2004). Different patterns of duplicate publication. Journal of the American Medical Association, 291, 974–980.

    1. Miguel, thanks for the references. We should look more into that. Some of those figures (14%, in particular) would be critically dependent on how “redundant” is defined and assessed. Having said that, it would be interesting to see if the trends we describe look different in these areas.
