Your experiment didn’t work out? The Journal of Errology wants to hear from you

It’s no secret that it can be difficult to find negative results in the scientific literature. For a variety of reasons, positive publication bias is a real phenomenon. In clinical medicine, that can paint a more optimistic picture of a field than is actually the case. And in basic science, it can mean other scientists may repeat experiments that have already failed.

But the new Journal of Errology, yet to be launched, wants to be a home for experiments that didn’t work out. If it succeeds, it could give researchers a place to publish results that don’t look great without feeling the need to make them look any better, a temptation that can lead to retractions.

BioFlukes, the journal’s Bangalore, India-based publisher, has ambitious goals, according to the company’s Mahboob Imtiyaz:

The aims of the Journal of Errology are manifold. At the top of our list of priorities is to create a repository for researchers to share their experiences with future researchers. From our interviews and our own experience, we have found that failing teaches us a lot more than success. For a new researcher, the most valuable resource is his or her colleagues and other lab companions, who warn of the various pitfalls from their own experiences or from lessons imparted by their instructors, colleagues, etc. With the help of the Journal of Errology, we can make the world one big lab, where researchers share experiences on a global level.

In the last decade, there has been a noticeable fall in the number of groundbreaking discoveries and innovations; many have gone as far as to call it “innovation stagnation”. The average age at which researchers make discoveries is also on the rise, as can be seen in the rising average age of Nobel Prize winners in every field over the past five decades. The reasons are many, but one of them, and one that can be addressed with available technologies, is the failure to share experiences. Humans are the only species in the known Universe with the ability to learn from each other’s experiences. It makes no sense for an advanced civilization such as ours to fall prey, again and again, to mistakes that have already been made. Publications do not publish these results, in order to keep reader interest, and researchers do not care to share them beyond a few close peers and colleagues.

With these and many more benefits in mind, we are taking the first few steps toward realizing our goals, hoping to be noticed and carried forward by the combined effort of researchers around the world.

Those are some tall orders, and BioFlukes has its work cut out for it. The journal doesn’t have an editor-in-chief yet, but the company has been talking to various scientists, Imtiyaz said. So far, Eduardo G. P. Fox, an entomology postdoc at the Federal University of Rio de Janeiro, has signed on as section editor for biochemistry, biopharmaceuticals, and cellular biology.

Imtiyaz couldn’t give us an exact launch date, but he said the journal plans to start publishing as soon as its website is complete and it has DOIs.

However, we have already begun accepting submissions and have received a good response, with a few already in hand.

We pointed out that there are other similar journals, such as the Journal of Negative Results in BioMedicine (JNRBM) and the Journal of Interesting Negative Results in Natural Language Processing and Machine Learning. How would the Journal of Errology differentiate itself?

Unlike journals such as JNRBM or “all results” journals, which accept completed research papers, we accept unpublished hypotheses, errors (inexplicable or frequent), stumbles, and other problems that were overcome during the course of successful research. We aim to create a place for researchers to discuss these results, with votes and recommendations from other researchers. Submissions to our site can be made either before or after publication of the paper in another research journal. Also unlike JNRBM, we do not charge for submitting articles, and the articles can be accessed freely by all researchers.

That led to our next question. If the journal isn’t charging subscribers or submitting authors, how would it fund itself?

We have other sources of income, such as “Innovators4Hire,” a challenge-based candidate screening system, and our lab equipment review blog “Lab Critics.” We are also looking at advertising in the near future.

BioFlukes hasn’t published any other journals, but Imtiyaz said the company is being supported by the Society of United Life Sciences, which publishes the Scholar’s Research Journal. That journal has published one issue, for January-June 2011.

We’ll keep an eye on the Journal of Errology.

30 thoughts on “Your experiment didn’t work out? The Journal of Errology wants to hear from you”

  1. Great find! I think I should submit the story of my life to them! Jokes apart, I hope this is not a scammy thing. I hope this lives up to the tall promises being made… once again thanks a ton for unearthing this new journal… we shall all have our eyes trained on this one!!!

  2. How could this journal ever account for random/human error? I can think of a number of failed experiments I performed when I was a grad student which had nothing to do with the nature of things and everything to do with my inability to correctly perform the experiment. Those results were never published, as they could not have helped anyone.

    1. If interpreted wisely, those errors could be turned into a list of possible errors caused by inability or unawareness. That would be a good contribution, helping those who are new at the lab bench understand that awareness and ability (capacity building) are important for effective research.

  3. Reporting negative findings could be very useful in synthetic chemistry, particularly if they are from research groups known to be reliable/rigorous in their efforts. Alternatively, place the unsuccessful efforts in the supporting information.

  4. What is the point of publishing there anyway? If it’s a good study that led to null results, you can still publish it in a good journal. I mean, assuming you have enough power, which you should have if it was a good study. It’s only null-result studies with insufficient power that cannot be published anywhere. Those studies may only be useful for a meta-analysis, and so perhaps there should be a journal that publishes them. Researchers who receive federal funding should be required to publish all null results.

    1. Your ideas may work in a field like medicine, but aren’t really transferable to most laboratory science.

      Sierra Rayne mentioned synthetic chemistry: “The synthesis was very promising but yielded a mixture of hard-to-characterize crap that lacked the NMR peaks characteristic of the compound we were trying to make.” Not enough power? “We tried the same procedure thirty times (wasting six months), and still got a mixture of hard-to-characterize crap that lacked the NMR peaks characteristic of the compound we were trying to make.” Still not publishable. How about a meta-analysis? “We have now presented multiple ways to screw up a synthetic procedure.” It’s called a group meeting. Not publishable.

      (And could you imagine how long the methods sections of papers that published *all* of their negative results would be? In my own work, we’re probably talking 50-100 pages per paper if I were being thorough.)

      The reason that it’s hard to publish negative results in laboratory science is that it’s hard to interpret what they mean. In medicine, the experiments are often one step and the design is simple: one group gets the drug, one group gets the placebo, measure some observable, test for a difference. The simplicity of the design seems necessary, given the variability of the human population. If you make sure your N is high enough for a given effect size, the negative result is reasonably easy to interpret.
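
      For a sense of scale, here is a quick power calculation in Python (a sketch; statsmodels and the d = 0.5 “medium” effect size are my illustrative choices, not numbers from any study discussed here):

          # Per-group sample size for a two-sample t-test to reach 80% power
          # for a medium standardized effect (Cohen's d = 0.5) at alpha = 0.05.
          # All numbers are illustrative, not prescriptive.
          from statsmodels.stats.power import TTestIndPower

          n_per_group = TTestIndPower().solve_power(
              effect_size=0.5, power=0.8, alpha=0.05)
          print(f"required N per group: {n_per_group:.0f}")  # about 64

      Below that N, a null result is ambiguous; above it, “we saw nothing” actually means something.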

      In contrast, consider a complex, multi-step synthesis. You get black goo. Now you get to guess… is it because (a) the glassware wasn’t dry enough, (b) the atmosphere had too much oxygen, (c) traffic made you 15 minutes late in step 6 of an 11-step synthesis, (d) something else is wrong, or (e) the synthesis is impossible.

      Often the reason for the negative result is only clear after a positive result has been obtained: “Oh, we need to run this under an argon atmosphere; it actually reacts with nitrogen.” Sometimes it never works, and the researcher has weeks or months of unpublishable negative results. A journal like “The Journal of Errology” would at least take some of the sting out of the dead-ends.

      1. Quite the contrary, ‘sfs’: I’ve heard several well-known synthetic chemists speak to the value of so-called negative results being published in some forum, provided there is some credibility and guidance behind what did and didn’t work.

      2. @Sierra Rayne:

        I do agree with you. The key is “credibility and guidance.” Negative results that are well understood are quite useful. The problem is that most negative results are not well understood. It takes a lot of time to control for the sources of error. Admittedly, most research involves this kind of troubleshooting. The question is often whether to switch to a more lucrative project or to *really* understand why things are broken.

  5. There are other journals publishing negative results, like The All Results Journals (http://www.arjournals.com). They are published by the non-profit organization Society for the Improvement of Science and have a very competent editorial board in many fields. The formality of this one, from my point of view, is much better than that of the Errology journal.
    Lewis

  6. The obvious questions:

    1) Is there a negative career stigma attached to publicly declaring yourself as someone whose experiments fail and whose ideas are wrong?

    2) Given that researchers have limited amounts of time and energy, and that writing an article for publication consumes both, will the same career rewards be offered for publishing errors as for publishing successful results? If not, why would a researcher commit time and energy to writing up mistakes when that effort could instead go toward new experiments that may prove more successful?

    1. I’m not sure about (1). Given that laboratory research often takes a lateral turn, I would say that most initial ideas are failures, but a good scientist can examine the failure and correct the inquiry toward something more successful.

      As for (2), I think most negative results are appropriately filed in the circular bin. By the time you’ve nailed the reason, you could have done something far more productive. However, I’ve seen and heard about a few cases where the result was deemed important for the lab, and some real time was spent in investigating its failure. The time to write a paper is usually much shorter than the time to do the research, so if you’ve got a carefully-constructed and compelling negative result, then why not?

      1. I see the obvious value in having such information available; I just have a hard time seeing the motivation for someone to take the time to write it up. Will a tenure committee see an article troubleshooting your failure as something to reward, or are they more likely looking for successes?

        Combine that with the somewhat “macho” culture of science where admitting you’re wrong is often seen as a sign of weakness, and it becomes even more questionable. I mean, who wants to become known as “the guy who accidentally sneezed on his plates”?

      2. @David Crotty:

        Your concerns are very well-placed. I doubt that negative results would get you tenure. On the other hand, I doubt that anyone would blink at a few negative results in a sea of solid, positive results.

  7. I think reporting unexplained experimental artefacts is also a good thing that many here seem to have forgotten.

    That slight modification to your protocol that makes the thing work for no apparent sensible reason may be the aim of a paper published in JoE. Who has never seen something really, really weird take place repeatedly, and either diligently avoided doing it or added it to the method description without further question?

    I think this is fertile ground for discussion in JoE, and certainly no one would frown at that.

    1. This.

      I’ve noticed that there’s a certain amount of “magic” in cell biology protocols. Determining what constitutes the magic seems like a thankless task. If this kind of work could be somehow distributed over multiple groups, perhaps progress could be made…

      1. But there’s a difference between publishing a method (and one that works) and publishing experiments that failed and theories that were proven wrong. There are some great journals for methods publishing and most of these include troubleshooting steps in their articles (and I should know, I used to be the editor of one http://www.cshprotocols.org).

  8. “I can think of a number of failed experiments I performed when I was a grad student which had nothing to do with the nature of things and everything to do with my inability to correctly perform the experiment. Those results were never published, as they could not have helped anyone.” – Chad

    And now that you don’t work in the laboratory, you see all failed experiments as the laboratory worker’s inability to correctly perform the experiment.

    Robert Millikan measured the charge of the electron and got a value that was too low, due to an incorrect value for the viscosity of air. The reported electron charge values gradually increased in the years to come. When researchers obtained a value higher than Millikan’s – the correct value – they searched for ways to explain their “error” and didn’t report the data. Values closer to Millikan’s were readily reported. Eventually the accurate charge was obtained. This is just one example of how a Journal of Errology can help science get to the truth faster.
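
    The dynamic is easy to simulate (a toy sketch in Python; the “true value,” the anchor, and the keep-probability rule are all invented for illustration, not Millikan’s actual numbers):

        # Toy model: labs measure a quantity with noise, but results far
        # above a low "anchor" (an earlier famous value) are often written
        # off as error and shelved. All numbers are invented.
        import numpy as np

        rng = np.random.default_rng(0)
        true_value, anchor = 1.602, 1.55   # arbitrary units, hypothetical

        measured = rng.normal(true_value, 0.03, size=10_000)
        excess = np.clip(measured - anchor, 0.0, None)         # distance above anchor
        keep_prob = 0.2 + 0.8 * np.exp(-(excess / 0.04) ** 2)  # high values often dropped
        reported = measured[rng.random(measured.size) < keep_prob]

        print(f"mean of all measurements: {measured.mean():.4f}")
        print(f"mean of reported results: {reported.mean():.4f}")  # biased low

    The reported average creeps toward the anchor even though every individual measurement is honest; only the judgment of what counts as an “error” is biased.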

    How many scientists in this story thought like Chad? They were sure that they had screwed up and that no one could benefit from their findings. Not even the people halfway around the world who got the exact same results.

    1. I know that this happens, but my personal experience is that most examples of “things not working” turned out to be either (a) I wasn’t careful enough, (b) there were factors that went unaccounted for, or (c) the idea was wrong. This is especially true in graduate school, where students are often still learning the degree of care that certain measurements need.

      If everyone published all their results that didn’t work, we’d have journals littered with papers like http://pages.cs.wisc.edu/~kovar/hall.html

      In that context, correcting the Millikan experiment would be almost akin to looking for a weak signal in a sea of noise. What we need are negative results that we can trust, not a large body of negative results that largely reduce to unknown factors or operator error.

      1. It is self-evident that any reputable journal publishing negative results would need to be reliable and informative. Thus, the ‘NMR of tar’ and ‘Ge band structure by a Comp. Sci. major’ examples are straw-man arguments, and pointless. Perhaps there is some other discussion advocating publishing this nonsense, but that is not what is being discussed here.

      2. @Sierra Rayne:

        The Kovar/Hall article was an attempt at humor; apologies that it fell flat for you. The responses you characterize as strawmen were directed at statements that advocated broad publication of negative results. I’ve seen no real standards proposed here for what would make a negative result publishable.

        “It is self-evident that any reputable journal publishing negative results would need to be reliable and informative”

        Reliability is a strange concept with a negative result in laboratory science. With a positive result, you demonstrate that the result itself is reproducible, possibly using multiple techniques. For a negative result, it’s more complicated than running it 100 times and watching it consistently fail. To be reliable, the article would have to seriously investigate why a technique did not work. To be truly reliable, the article would have to show exactly what factors caused it to fail. This starts to sound a lot like a series of positive results, and if the work is truly informative, it’s publishable in a more traditional journal.

        I think that there are remarkably few circumstances that justify the time invested in writing articles on negative results. That time is probably better spent redefining the project or pursuing a new one. Sometimes that’s not an option, so it’s nice that a journal like this exists.

    2. The point I was trying to make was that negative results can simply be due to an inability to correctly perform a certain experiment. For example, I spent many months trying to synthesize a certain gold complex and failed many times. As it turned out, this synthesis required an intimate understanding of air-free synthetic techniques, which I was not taught and subsequently learned the hard way. Because I failed the synthesis many times, I could have easily and incorrectly chalked it up to “this just doesn’t work” and published it in a journal like the one proposed, when in fact it was because I just didn’t have the proper technical skills at that point in my graduate studies.
      As I do indeed continue to work in the laboratory, I’ve found that due diligence is crucial; if your experiments fail, chances are good that your technique may be to blame.

  9. In psychology, most effects are hard to replicate. Papers do not tell you how many parameters were tried before the results came out that way, supporting a sexy story. They make it sound like the phenomena are robust but when others try to replicate, they cannot.
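
    A toy simulation in Python makes the problem concrete (the 20 analysis variants, the group size of 30, and the t-test are my invented stand-ins for “parameters that were tried”):

        # Null world: no real effect anywhere. Each simulated "study" tries
        # 20 analysis variants and keeps the best-looking p-value.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_studies, n_variants, n = 5_000, 20, 30
        a = rng.normal(size=(n_studies, n_variants, n))   # "control" data
        b = rng.normal(size=(n_studies, n_variants, n))   # "treatment" data
        p = stats.ttest_ind(a, b, axis=2).pvalue          # one p per variant

        hits = (p.min(axis=1) < 0.05).mean()
        print(f"studies with a 'significant' variant: {hits:.0%}")  # ~64%

    That is roughly the 1 − 0.95^20 ≈ 64% you would expect by chance alone, which is why an unreplicated effect supporting a sexy story deserves skepticism.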

    1. Well, as this blog has shown quite clearly, psychology needs to be reformulated.

      Maybe this discussion (which certainly would not fit in a current psychology journal) is good material for a thought-provoking editorial in the Journal of Errology. Call it Errology in Psychology: An Epistemological Critique…

  10. Quite the contrary, ‘sfs’, your statement that “[t]he responses you characterize as strawmen were directed at statements that advocated broad publication of negative results” is, once again, inaccurate.

    One of the strawmen I referred to was your ‘NMR of tar’ reply to my statement that “[r]eporting negative findings could be very useful in synthetic chemistry, particularly if they are from research groups known to be reliable/rigorous in their efforts. Alternatively, place the unsuccessful efforts in the supporting information.”

    Clearly, it is not reasonable to characterize my statement as one that “advocated broad publication of negative results.”

    1. @Sierra Rayne:

      I think you misunderstood. My “NMR tar” post was in response to Loris Kant, who did advocate broadly publishing negative results with language that sounded very specific to clinical studies. The point I was making is that negative results in the laboratory setting (e.g., synthetic chemistry) are much harder to interpret than those in clinical studies.

      Look, I really do agree with you that there’s a body of negative results that should be out there. However, it takes a lot of effort to truly understand a negative result. Historically, not many prominent synthetic chemists have been willing to take the time to write up these kinds of results, and I don’t think that it’s because of the lack of journals.

  11. I am happy to see that the Journal of Errology is such a hotly debated topic. We are still getting started, and thus all comments and ideas are truly invaluable at this point.

    I would like to emphasise that I personally do not see any advantage in publishing trivial experimental mistakes devoid of intriguing developments or discussion, especially if they cannot be reproduced. However, sometimes we err and, in doing so, find things we were not looking for. I should remind you that radioactivity was first observed this way. In my personal experience, I have had many moments in which I made a mistake and found an interesting result, often with useful applications. For instance, skipping one step of the silver-staining protocol for gel electrophoresis of proteins gave me much more sensitive detection of trace bands. Not a beautiful result to show, but it certainly hinted at things I could not see properly before. Even a precipitating substrate, apart from ruining your enzymatic assay, can greatly help purify your enzyme of interest (or others).

    I think science is not exactly about always conforming to the current established standards and beliefs so as to please colleagues and be generally accepted (and, hopefully, cited). By questioning paradigms, protocols, and previous findings, some positive results are bound to surface, and of course a lot of resistance. I think this is essential for good science.

    Many facets of well-known phenomena in all fields actually remain quite unexplained.

    Also, I would like to add that we are discussing alternative ways of presenting the published material, so as to make the journal more dynamic and transparent (comments, voting, tags, linking, etc.). As the scope implies, the journal is aimed at discussing the observations of others, not merely reading without question when searching for something to cite. This will certainly be another difference from other journals, as we intend to question some publishing practices.
