Got “significosis?” Here are the five diseases of academic publishing

John Antonakis

John Antonakis is a psychologist by training, but his research has run the gamut from showing that kids can accurately predict election outcomes just by looking at candidates’ faces to teaching charisma to people in leadership positions. Now, as the newly appointed editor of The Leadership Quarterly, he’s tackling problems in academic publishing. But his approach is distinctive: he sees these problems as diseases (e.g., “significosis”) that threaten the well-being of the academic literature. In a new paper, he calls on researchers, editors, and funders to prevent, diagnose, and treat the five diseases of academic publishing.

Retraction Watch: What prompted you to think about problems in science publishing in terms of diseases? 

John Antonakis: A disease is usually thought of as some sort of disorder that has symptoms and causes debilitating effects on a body—in this case, the body of knowledge. Reflecting on how science is done, I noticed that the outcome—what is being published—is oftentimes disease-riddled. Such findings will either mislead or fail to make a difference to policy and practice. Working backward from the outcome, I tried to figure out what the causes could be. My conclusion is that (a) the collective practice of how science is done, (b) the conditions under which it is done, and (c) the incentives given to researchers by journals, research sponsors, or universities appear to be largely responsible for creating the breeding ground for these diseases.

RW: Let’s get into some of the details: What are the five specific “diseases” that are infecting science?

JA: These diseases, which are strongly interlinked and probably have overlapping causes, include:

  1. Significosis is the incessant focus on producing statistically significant results, a well-known problem but one that still plagues us. Because the players in the publication game consider only statistically significant results interesting and worthwhile to publish, the distribution of published effect sizes is highly skewed. These potentially wrong estimates feed into meta-analyses and then inform policy. A result could be significant for many reasons, including chance or investigator bias, and not because the effect is true (see the simulation sketch after this list).
  2. Neophilia is an excessive appreciation for novelty and for snazzy results. There is nothing wrong with novel findings per se, but they are not the only findings that are useful; and, of course, sometimes novel findings turn out to be false. Replications of a previous effect, for instance, may not seem very interesting at the outset, but they are critical to understanding whether an effect is present or not. Many journals simply do not consider publishing replications, which I find disturbing. In my field, I am rather certain that many published findings and theories are flawed; however, they will never be challenged if replications—and null-results studies too—are never published.
  3. Theorrhea is a mania for new theory, something that afflicts many branches of social science. That is, top journals usually require that a paper make a new contribution to theory, and that research be theory driven rather than exploratory. How is it possible that we can have so many contributions to theory? Imagine that management research alone has, say, five elite journals, each publishing 80 papers a year. How can the field produce several hundred new contributions to theory every year, as compared, say, to physics, which has very strong theoretical foundations but develops theory more slowly and also appreciates basic research?
  4. Arigorium concerns a deficiency of rigor in theoretical and empirical work. The theories in my field, and in most of the social sciences too, save economics and some branches of political science and sociology, are very imprecise. They need to be formalized, not produced on the cheap and in large quantities, and they must make more precise, realistic, and testable predictions. As regards empirical work, there is a real problem of failing to clearly identify causal relations. A lot of the work done in many social science disciplines is observational and cross-sectional, and there is a dearth of well-done randomized controlled experiments, either in the field or the laboratory, or of work that uses robust quasi-experimental procedures.
  5. Disjunctivitis is a collective proclivity to produce large quantities of redundant, trivial, and incoherent work. This happens for several reasons, but primarily because quantity of publications is usually rewarded. In addition, researchers have to make a name for themselves; given that novelty, significant results, and new theory are also favored, a lot of research is produced that is disjointed from the established body of knowledge. Instead of advancing in a paradigmatic fashion, researchers each take little steps in different directions. Worse, they go backwards or just run on the spot and do not achieve much. The point is that the research being done is fragmented and is not helping science advance in a cohesive fashion. Findings must be synthesized and bridges built to other disciplines (e.g., evolutionary biology) so that we can better understand how the world works.
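
To make the significosis point concrete, here is a minimal simulation sketch (our illustration, not from the interview or the paper) of how a publish-only-if-significant filter skews the effect sizes that reach the literature. It assumes Python with NumPy and SciPy; the true effect, sample size, and the p < 0.05 cutoff are all hypothetical choices.

```python
# Hypothetical illustration: a significance filter inflates the
# average effect size in the "published" literature.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.2     # small true standardized effect (Cohen's d)
n_per_group = 30      # a typical small-sample study
n_studies = 10_000

all_effects, published = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    d_hat = treated.mean() - control.mean()   # estimated effect (SD ~ 1)
    _, p = stats.ttest_ind(treated, control)
    all_effects.append(d_hat)
    if p < 0.05:       # only "significant" studies get published
        published.append(d_hat)

print(f"true effect:              {true_effect:.2f}")
print(f"mean effect, all studies: {np.mean(all_effects):.2f}")
print(f"mean effect, published:   {np.mean(published):.2f}")
```

Because an underpowered study clears the significance bar only when sampling error pushes the estimate well past the truth, the mean of the “published” studies comes out at roughly three times the true effect under these settings; averaging such studies in a meta-analysis reproduces exactly the distortion described above.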

RW: Are some of the publishing diseases more harmful than others?

JA: I do not know—they are all bad, and we can see that in the symptoms they produce, including p-hacking, HARKing (i.e., hypothesizing after the results are known), running many treatments and then selectively reporting results, cutting corners to crank up output, lack of transparency in reporting, and so forth. This all has to stop because it is sullying science—witness the replication crisis, retractions, and the like. Mind you, the media are doing their job and I am happy that they cover these issues; but one negative externality is that the public is becoming mistrustful of science. Take the case of “power posing”: studies show that the original study was a fluke (or maybe a result of p-hacking or of not blinding the experimenters); yet one of the authors, Amy Cuddy, now has one of the most viewed TED talks of all time and a best-selling book on the topic. Such high-profile cases do not help science. Then we have climate change deniers, anti-vaxxers, even flat earthers. Some political parties even actively work to undermine science. We have to restore the public’s trust by doing our science more rigorously and cohesively, so that we can speak with one voice to the public and policy makers.

RW: If the publishing problems are diseases, what are the remedies?

JA: We have to valorize different types of research within the social sciences, including studies that produce null results and replications—if well powered and robustly designed, of course. Exploratory studies and more basic research must also be better regarded. We also need to put more rigor into what we do so that studies replicate better, and can be replicated too (thus, more reporting transparency is required). In addition, more research-synthesis efforts are needed; in particular, we need to do more “meta-research,” as John Ioannidis calls it: performing research on research so that we can figure out how to do our science in a more valid manner.

RW: What changes are you making at your journal to address these problems?

JA: As mentioned, I will be accepting a broader range of articles and making clear that contributions do not come only from statistically significant and novel findings. I will also be desk-rejecting more manuscripts, because too many submissions put a burden on the editorial team, the editorial board, and reviewers. Authors should consider more carefully what they submit and ensure it is robust and valid.

I am looking forward to my term as Editor in Chief of The Leadership Quarterly and am very optimistic that we can help fix the issues that challenge our science. However, more journals must join the effort to eradicate the diseases, and I believe that this will happen in due course.


7 thoughts on “Got ‘significosis?’ Here are the five diseases of academic publishing”

  1. Interesting viewpoint! This post highlights one of my pet peeves in the social sciences, which is nicely summarized in the neologism of theorrhea. To my mind, a theory is a Big Idea that can organize an enormous body of thought, yet still make specific predictions. Notable theories include evolution, continental drift, big bang, thermodynamics, homeostasis, and global warming. A theory is not some little ex post facto hypothesis to explain a particular experiment, even if nice math is used to do so.

    Does psychology (or sociology) actually have any theories?

    1. Yes, in my field, which borders psychology and biology (cognitive neuroscience), there are ‘models’ that predict basic human behavior in detail. For instance, the ‘dual route model’ of speech perception gives a detailed mechanistic account of the network of brain regions involved in speech perception, and of how precisely the various neural pathways make speech perception/production possible.

  2. Hmmm. The first part of the disjunctivitis definition sounds like the opposite of neophilia. So which is it? Are scientists over-replicating (redundancy) or under-replicating? And by the way, self-replication is important too. How many instances of misconduct are discovered by members of the same lab who cannot reproduce their colleagues’ findings? Seems like a lot to me. Thus, I’d argue that if multiple publications with different first authors from the same lab report the same findings, that is evidence of reproducibility.

    1. Hi Mitch:

      When you have a moment, take a look at the paper, which is free to download for the next year here: http://dx.doi.org/10.1016/j.leaqua.2017.01.006

      I am all for replications (and null-results studies too) and discuss this point extensively in the paper; most replications need to be done so that theories are trimmed and effects well understood. Disjunctivitis is a very different sort of malady; it is actually not about replicating at all but about advancing in a non-paradigmatic fashion, favoring quantity, and fetishising significance and novelty.

      Keep well.
      John Antonakis

  3. Publishing redundant studies is not the same as replicating studies. When you replicate a study, you do the exact same thing that the original author did with a different data set, thus strengthening the validity of that theory. However, what we have is people trying to create their own “theories” and constructs, which are all very similar, and that leads to redundancy.
