Why publishing negative findings is hard

Jean-Luc Margot

When a researcher encountered two papers suggesting that moonlight has biological effects, on both plants and humans, he took a second look at the data and came to different conclusions. That was the easy part; getting the word out about his negative findings proved much more difficult.

When Jean-Luc Margot, a professor in the departments of Earth, Planetary & Space Sciences and Physics & Astronomy at the University of California, Los Angeles, submitted his reanalyses to the journals that had published the original papers, both rejected them; after multiple attempts, his work ended up in other publications.

Disagreements are common, and crucial, in science; as the saying goes, friction makes fire. Journals are inherently uninterested in negative findings. But should it take more than a year, in one instance, to publish an alternative interpretation of somewhat speculative findings that, at first glance, seem difficult to believe? Especially when they contain such obvious methodological issues as presenting only a handful of data points linking biological activity to the full moon, or ignoring significant confounders?

Margot did not expect such a difficult experience with the journals, including Biology Letters, which published the study suggesting that a plant relies on light from the full moon to reproduce:

What surprised me was that the journal, Biology Letters, which had published such a weak result that has far-reaching implications in plant evolution, refused to publish the alternate hypothesis.

The Biology Letters paper, published in April 2015, suggested that the non-flowering plant Ephedra foeminea relies on light from the full moon for pollination. The plant was said to secrete translucent globules of sugary liquid that attract nocturnal pollinating insects, many of which navigate using the moon. Originally, the researchers thought the globules appeared at a certain time of year, but they were astonished to find that they appeared exactly when the moon was full.

What made the result “weak,” in Margot’s opinion, was that the authors presented only three occasions on which the globules were present within one or two days of a full moon. With so few data points, it’s entirely possible the results were due to chance, he said:

When I read their paper, I was very surprised to find that they only had three data points and they were claiming this association with the full moon on the basis of [these] three data points.

You would expect to find multiple studies in the literature with that type of coincidence with the full moon.

Margot appealed the journal’s decision to reject his letter, but was turned down again. His rebuttal was eventually published last October in the Journal of Biological Rhythms (JBR), after being rejected by two other biology journals in addition to Biology Letters. (One month later, he published a corrigendum that presented a more accurate way of calculating the time to a full moon but did not alter his argument or conclusions, he said.)
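As a rough illustration of what such a calculation involves (a minimal sketch, not Margot’s method; the epoch and synodic-month values are standard astronomical figures, and the example date is arbitrary), the interval to the nearest full moon can be approximated from a reference full moon and the mean synodic month; precise work would use an ephemeris:

```python
from datetime import datetime

# Mean synodic month and a reference full moon (the total lunar eclipse
# of 21 January 2000). Both are standard astronomical values.
SYNODIC_MONTH = 29.530588                      # days
REF_FULL_MOON = datetime(2000, 1, 21, 4, 40)   # UTC, approximate

def days_from_full_moon(date):
    """Approximate days between `date` and the nearest full moon.

    Uses the mean synodic month, so results can be off by several
    hours because true lunations vary in length.
    """
    elapsed = (date - REF_FULL_MOON).total_seconds() / 86400.0
    since_full = elapsed % SYNODIC_MONTH
    return min(since_full, SYNODIC_MONTH - since_full)

# Arbitrary example date, purely for illustration:
print(round(days_from_full_moon(datetime(2015, 7, 2)), 1))
```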

Catarina Rydin, a botanist from Stockholm University in Sweden and co-author of the original study, defended her paper:

The reason we found it important to report our finding was not the strong statistical power of the results, but a number of small but relevant biological observations that all point in the same direction. And, more importantly, we wanted to report this so that other scientists can also contribute new information on the topic, in Ephedra and other plants.

Commenting on Margot’s criticism, Rydin added:

The paper by Margot is however not very interesting in our opinion, as it does not contribute any news to science; no new data, and no errors in our calculations. That our hypothesis is based on very few data points is clear from our paper, and the risk of Type I errors is known to every scientist. Sometimes, it is important to have the courage to present also quite bold hypotheses in order for science to progress.

We reached out to Rick Battarbee, the editor-in-chief of Biology Letters based at University College London, who told us:

We do indeed publish alternative viewpoints on the same data… However, on this occasion our peer reviewers advised rejection and this advice was upheld by our Editorial Board. Following your email we have scrutinised the reviewer reports and the manuscript again and are of the opinion that the course of action we took was correct.

The comment raised valid points and in particular it highlighted the need for more observations to test the hypothesis further, as in fact the original authors had called for. However, the reviewers of the comment were happy that the hypothesis itself remained valid, as were we, and this was the basis of our decision.

Most recently, the Journal of Systematics and Evolution published a review paper citing the pollination claim as fact. Margot has reached out to the paper’s first author, but has not received a response so far.

A few years ago, Margot came across a 2004 study published in the International Journal of Nursing Practice (IJNP), which claimed that admissions to a hospital in Barcelona correlated with the lunar cycle. Here, too, Margot spotted some issues: the authors did not account for the fact that the length of the lunar cycle varies, nor did they consider other potential sources of variation, such as day of the week. Taking these additional analyses into account undermined the article’s main conclusions, he said.
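To illustrate the kind of confounder check being described (a minimal sketch with synthetic data, not Margot’s actual analysis), one could first test whether daily admission counts vary by day of the week before attributing any pattern to the lunar cycle:

```python
import numpy as np
import pandas as pd
from scipy.stats import chisquare

# Synthetic daily admission counts standing in for hospital records.
rng = np.random.default_rng(0)
days = pd.date_range("2004-01-01", "2004-12-31", freq="D")
df = pd.DataFrame({"date": days,
                   "admissions": rng.poisson(lam=12, size=len(days))})

# Observed admissions per weekday, and the totals expected if the rate
# were constant (weekdays occur unequally often in a calendar year, so
# the expectation is weighted by the count of each weekday).
observed = df.groupby(df["date"].dt.dayofweek)["admissions"].sum()
n_days = df.groupby(df["date"].dt.dayofweek)["date"].count()
expected = observed.sum() * n_days / n_days.sum()

# A significant result flags day-of-week structure that any
# lunar-cycle analysis would need to control for.
stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.3f}")
```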

But once again, he found it difficult to get his analysis accepted: he submitted it to five different nursing journals before it was published in Nursing Research last May, roughly a year and a half after he first submitted it to the original journal.

Lin Perry, editor-in-chief of the IJNP from the University of Technology Sydney in Australia, said her journal did not reject Margot’s manuscript, but the paper was “unsubmitted” three times because it did not comply with the journal’s formatting requirements. Perry added:

Times change, standards and expectations change; all readers should consider all papers in the light of current standards and expectations for academic and clinical practice… I welcome discussion and critique of papers published in the International Journal of Nursing Practice, but would prefer to focus on current rather than historical pieces.

But Margot was not convinced that formatting problems were the issue in each instance:

On each one of my three submission attempts to IJNP, I received a message from the journal stating ‘We have forwarded your manuscript to reviewers for their comments.’ I do not understand how the manuscript allegedly failed on three occasions to meet the journal’s formatting requirements. It seems rather odd that the ‘unsubmission’ would take place after sending the manuscript to reviewers.

Marion Broome, who is based at Duke University in Durham, North Carolina, and is editor-in-chief of Nursing Outlook, another journal that rejected Margot’s paper, told us:

It was rejected because I do not have reviewers or readers who have expertise in the kinds of methods and analyses the authors chose to use in this manuscript.

Margot said that having an astronomer on the review team would have improved the review process for the original papers. With interdisciplinary research on the rise, journals must be willing to contact experts outside their immediate fields, he noted.

Journal editors are not alone in hesitating to publish negative results, notes Margot; media outlets have a similar culture.

The pollination paper, for example, received extensive media coverage, featuring in New Scientist, Science, National Geographic, Smithsonian Magazine, The Independent, The Daily Mail and more.

Out of concern, Margot contacted the reporters of every article by email, Twitter or both, informing them of the limitations of the study.

But only one website — Mother Nature Network — added a cautionary statement, said Margot, changing “undeniable correlation” to:

research shows that there appears to be a correlation. There is, however, disagreement among scientists that the shrub’s pollination is related to the lunar cycle.

And in case you were wondering — Margot has also debunked the long-standing theory that more women give birth during a full moon.

Update 2/26/16 3:06 p.m. eastern: Margot just informed us that a letter to the editor he submitted to the Journal of Systematics and Evolution about a recent review that cited the Biology Letters paper was rejected.


17 thoughts on “Why publishing negative findings is hard”

  1. “a number of small but relevant biological observations that all point in the same direction.” – this is the definition of confirmation bias.

    That being said, publishing negative data is more difficult because of the power issue. The bar for demonstrating the absence of a difference is much higher than for demonstrating its presence. One of the most common arguments you see is that “the authors failed to see the difference due to a small sample size and, therefore, insufficient power of the analysis.”

    1. “Sometimes, it is important to have the courage to present also quite bold hypotheses in order for science to progress.”

      That’s true, but it’s also important to try to knock those hypotheses down. That’s how science ~actually~ progresses.

    2. Yurii Chinenov – I am not a scientist but a science follower. I wonder whether we should differentiate “inconclusive” findings from “negative” findings, with the latter being those backed by adequate statistical power. Whether or not the suggestion is practical, does it make at least some sense?

  2. I’ve submitted a paper that claims a stopped clock always tells the right time. True, I only have two data points (coincidentally at 2 am and 2 pm, when the clock has stopped), but I’m going to publish anyway.

    1. Ha, your study is so underpowered. I have invented a clock that turns backwards and tells the correct time all the time. I have 4(!) data points.

  3. I have often wanted to start a journal just for negative data. Imagine how far science could progress if individual labs could stop repeating the same failed experiments over and over because someone finally published that it does not work. Sometimes people can sneak the negative data into the discussion or a talk on similar information, but not often enough to stop the repeats.
    There is no help for those who think “But it will work for me even though it didn’t work for them.”

    1. There are already a number of journals that focus specifically on ‘negative’ data, including the Journal of Negative Results in BioMedicine (http://jnrbm.biomedcentral.com/), which has been publishing for nearly 15 years. The issue is that many authors do not seek to publish their null or inconclusive results, and these are treated and considered in the same way as ‘positive’ research studies. Rather than positively discriminating in favor of null results, journals should move away from considering issues such as ‘impact’ and ‘novelty’ when deciding what to accept and instead rely on quality criteria relating to the actual science. This would prevent issues such as those described above from arising in the first place.

      Declaration of Interest: I am the Publisher of the Journal of Negative Results in BioMedicine, which I have mentioned, and work for BioMed Central, which publishes a number of journals (including the BMC-series) that espouse the ethos of judging articles by their scientific merit rather than their potential impact.

  4. It did not comply with the journal’s formatting requirements?

    Was it by chance the same journal that features in Rick Trebino’s “How to Publish a Scientific Comment in 123 Easy Steps”?
    (not an actual question, I know it isn’t)

  5. On the topic of why it is important to publish negative results, I think the answer lies in the fact that “moon causes pollination” will be quoted and cited a lot – and that’s what journals want. Whereas “moon does not cause pollination”, albeit important, will not receive the same exposure. Just like “cranberries do not cause yellow fever” would merely receive a “yeah, duh”.

  6. In my opinion, it’s all about a negative stigma associated with negative results. If we can cross this hurdle, then negative results will become more “acceptable”. Also, the notion that currently published papers only contain “positive” data is incorrect. Much of the data is negative, just not perceived that way. May my ideas be useful to some:

    Teixeira da Silva, J.A. (2015) Negative results: negative perceptions limit their potential for increasing reproducibility. Journal of Negative Results in BioMedicine 14: 12.
    http://www.jnrbm.com/content/14/1/12
    http://www.jnrbm.com/content/pdf/s12952-015-0033-9.pdf
    DOI: 10.1186/s12952-015-0033-9

  7. Based on my more than two decades of experience challenging cell division theory, which claims one mother cell divides into two daughter cells, I know how hard it is to get alternative views published in mainstream journals, especially the top journals. Many of my criticisms that were rejected were later published in Truth-finding Cyber-press’ journals such as Logical Biology and Top Watch, for which I serve as editor-in-chief. An open letter to CNS, published back in 2012, summarized some of my direct experience. This letter can be read at the following places:
    https://science4truth.wordpress.com/2014/08/06/an-open-letter-to-cns-cell-nature-and-science/
    English http://blog.sina.com.cn/s/blog_502041670102e9vu.html
    Chinese http://blog.sina.com.cn/s/blog_502041670102e9vw.html
    Details with Figures (1) http://blog.sina.com.cn/s/blog_502041670102eap4.html
    Details with Figures (2) http://blog.sina.com.cn/s/blog_502041670102eaop.html
    Details with Figures (3) http://blog.sina.com.cn/s/blog_502041670102eapi.html
    Details with Figures (4) http://blog.sina.com.cn/s/blog_502041670102eb0m.html
    Details with Figures (5) http://blog.sina.com.cn/s/blog_502041670102eb0o.html

  8. This is the way I “heard” the story:

    “There’s this desert prison…. with an old prisoner, resigned to his life, and a young one just arrived. The young one talks constantly of escape, and after a few months, he makes a break. He’s gone a week and then he’s brought back by the guards. He’s half dead, crazy with hunger and thirst. He describes how awful it was to the old prisoner. The endless stretches of sand, no oasis, no sign of life anywhere.

    The old prisoner listens for a while, then says, `Yep, I know. I tried to escape myself, twenty years ago.’

    The young prisoner says, `You did? Why didn’t you tell me, all these months I was planning my escape? Why didn’t you let me know it was impossible?’

    And the old prisoner shrugs, and says, `So who publishes negative results?’” (Jeffery Hudson, in “Scientist as Subject: The Psychological Imperative.”)

  9. Am I missing something? It doesn’t sound like he had any actual ‘results’ but was just writing what were basically letters to the editor complaining about published papers. In the first case, the point seems to be that the authors had three data points and may have over-interpreted their findings, but that doesn’t add up to a ‘negative result’ on someone else’s part. The second case seems to have something to do with lunar cycles, but again Margot doesn’t seem to have any new data on the points raised. Why should this be considered a ‘negative result’?

  10. Some people, including Rydin and Battarbee, are (wilfully or otherwise) missing some important points.

    First, it’s not okay to dump poorly-reasoned, poorly-tested hypotheses into the literature and expect them not to be challenged on the basis of the quality of the method and rationale. Part of the process of testing hypotheses – arguably the most important part – is simply that they make sense, that they have a rationale, that they are consistent with what we know already. The other part is that the empirical test is an actual test, not just an excuse to publish a “bold” hypothesis. Bold hypotheses are great, but boldness is a virtue only if there is a boldly coherent rationale to match. Otherwise, they’re a dime a dozen. They should not be published in a journal unless they are really tested (unless the proposal is purely theoretical and then it should be coherent and consistent with available facts). If a journal publishes them they should publish valid logical challenges of theory and method.

    The comment by Battarbee that the hypothesis “remained valid” is breathtaking, since the reason it remains valid is that the authors went really easy on it when it came to testing it, and the reviewers rewarded them for their gentle handling (“bold hypothesis, handle with care to prevent breakage!”). Because it’s really important that dastardly, would-be critics waste their own time properly testing every half-baked pseudo-theory journals give a pass to.
