Open science journal F1000Research posts its first retraction

An honest error has prompted the first retraction of a paper published in F1000Research, a relatively new open science journal that publishes all articles before peer review and then solicits such review.

Here’s the notice:

The authors of this article, Dr Evelio Velis and Dr Graham Shaw, would like to retract this article [Velis E, Shaw GP. (2013) The AIDS epidemic in south Florida: black non-Hispanics in our communities remain increasingly vulnerable [v1; ref status: awaiting peer review, http://f1000r.es/1wj] F1000Research 2013, 2:236 (doi: 10.12688/f1000research.2-236.v1)] from publication in F1000Research. The authors have been informed that the data provided by the Florida Department of Health, on which the conclusions of the article were based, were incomplete. The authors apologise for this case of honest error. The authors are now examining the complete data set and will republish their findings when their analysis is completed. The data set that accompanies this article, AIDS Cases in Miami-Dade County, 1993 to 2011, http://dx.doi.org/10.6084/m9.figshare.834938, has also been marked as retracted in the figshare repository. F1000Research, 12/11/2013.

The journal said it has published more than 300 articles since launching in July 2012. They seem to have put a lot of thought into their approach to retractions, and tell us:

We’ve taken the COPE and other retraction guidelines into account when formulating our retraction notice. As this article had not yet been peer-reviewed (and therefore had not been indexed), we hope that our method of retraction is clear. For retractions of articles that have been peer-reviewed, we’ll have a different strategy.

As you’ll see, the paper is now only accessible directly through the DOI or URL, or by browsing our list of published articles. As we have Crossmark implemented on the site, it should be clear that the paper is retracted. We’ve also added details of the retraction to the data set hosted by Figshare.

6 thoughts on “Open science journal F1000Research posts its first retraction”

  1. Looking at the F1000 Research publishing model, the term “published” carries a distinctly different meaning from the one it has in the traditional publishing model. In the traditional system, a paper is only “published” after it has been submitted to a journal, screened by editors and/or peers and put through some sort of editorial process. The physical appearance, in print or OA, of that revised paper then grants it the stature of “published”. In the F1000 Research model, a paper that is allowed to pass following a pre-screening by what some have criticized as a rather biased and elitist panel of “experts”, but without any traditional editorial processes, is considered “published” the minute it hits the internet. This is usually accompanied by an automatic DOI (the costs of which are of course covered by the high “publishing” fee). So, once payment is received and initial approval is given, the paper is basically published automatically, making it a pay-to-publish model, in fact. Only then does peer review take place.

    So not only are the processes of “publishing” and “review” inverted, so too is the process of quality control. What happens if there are two negative reviews and only one positive review, as I have seen in some F1000 Research papers? In the traditional publishing model this would equal a REJECTION, i.e., non-publication. But F1000 Research challenges this basic quality-control principle by basically stating that even though your automatically published (i.e., posted) paper may receive criticism from the peer pool, it stays published. Is my interpretation correct? Maybe someone from F1000 Research needs to get on RW to explain this case better.

    This poses new challenges to the publishing world, including to retractions. Looking at the story from a “traditional” perspective, one could actually say that this paper was retracted even before it was published. That in itself is quite a concept to wrap your brain around. It also seems VERY odd that the Florida Department of Health (FDH) did not know that this paper was being submitted to F1000 Research at the time of submission. Did the authors not inform the FDH that they were submitting the paper, or were they not required to inform the FDH (which seems impossible considering that they were using FDH data)? Is this just a case of rushing towards the checkered flag without first checking whether there is gas in the tank?

    1. On the positive side, this model prevents two classic editor/reviewer moves: 1) an about-to-be-scooped editor/reviewer stalls the review process so that they can publish their own paper somewhere else and be first; 2) an editor/reviewer at a high-profile journal whose theories would be undermined by a publication finds BS wrong with it and prevents it from getting into any journal that sends it to them. The anti-scooping benefits are great.

      Alternatively, with publications not having to pass multiple peer reviews… this model could lead to serious resume padding. Quite a bit of investigating will have to be done on every individual paper on a resume from this journal. How will funding bodies treat “published” but not yet peer-reviewed articles? Or papers with two good reviews and one bad? Will admissions committees be able to discriminate, especially ones that aren’t well staffed by people in the science know?

  2. Just to correct some confusion about our model: our pre-publication checks are conducted by the editorial team, not the F1000Prime Faculty of experts. They conduct all the standard editorial processes, as at any other journal. You are correct that articles cannot be withdrawn once they are published, but this acts as its own disincentive for authors to submit poor-quality work, as the invited reviewers’ comments are then very public. Currently, most science can get published in a ‘peer reviewed journal’ if the authors are persistent enough. The F1000Research model makes any criticisms of the work public so readers can clearly see the concerns with the article. This stops such criticism from being hidden from view in anonymous pre-publication peer review while the article is published anyway.

    1. Dear F1000 Research. It is excellent that you have responded on this blog, because it shows that you care about the issue of retractions and that you are in fact following the blogosphere carefully. It also shows a realistic and open interaction with your base (possibly also because you are in a phase of establishment). Good for you and for the company image. Regrettably, most other publishers have “retracted” to the shadows and fail to provide comment and responses in a public forum such as this one (which reports on their retracted papers), even though doing so could benefit their image if they actually bothered to face their critics. Allow me to make some of my concerns about F1000 Research public here, and allow you to respond, if you are willing and/or able to (please bear with me). My area of specialization is plant science, very broadly. Looking for papers published on plant science at F1000 Research, I entered the term “plant” into your search function. 42 hits appeared. I examined them. About 35 were directly related to plant science (pure), while the others were applied or marginally related.

      Issue 1) I did notice quite a lot of papers published on Arabidopsis, which seems to suggest either that many Arabidopsis papers are being submitted, or that many Arabidopsis papers are being given priority by the “editorial team” during the pre-selection process. Which of my assessments would be correct? It is well known that papers on this model plant would enhance the profile of the journal and ultimately lead to a good impact factor, as such studies would be well and widely referenced (see Wiley’s Plant Journal, for example).

      Issue 2) I saw primarily scientists from the US and Japan, and a couple from some other countries, primarily in the EU. Except for collaboration projects involving scientists from developing countries (excluding China), I did not see a single author from any developing country. Admittedly, 42 papers is a tiny sample, but F1000 Research has already been publishing for approximately 15 months, which would be ample time to attract scientists from all 193 countries. Would it be correct to say that the three main reasons you have not been able to attract more plant scientists (I cannot answer for other fields of study) are that: 1) they are extremely hesitant about (and perhaps ignorant of) the F1000 Research publishing model (preferring more traditional OA models such as BMC journals); 2) they are concerned about some of the issues that I have raised here; and 3) they are not willing to pay your OA fees, even if those fees are lower than the OA fees charged by PLoS, BMC, or even traditional publishers (e.g., Elsevier, Springer, Wiley, Taylor and Francis, etc.), simply because the F1000 Research publishing platform is too new and too experimental (i.e., they are still not confident about it)? As you are clearly aware, there are also new and excellent competing models, such as the Frontiers model, which allows a similar step-by-step, open-style peer review but assigns only one DOI and charges slightly higher OA fees. I should add that studies like the one published by Bohannon in Science, and awareness of widespread fraud in the OA movement, as shown for example by the Beall blog (www.scholarlyoa.com), do not help to build confidence among plant scientists. Have you any comment about these issues?

      Issue 3) There were one or two papers that had already been “published”, for example this one (http://f1000research.com/articles/2-214/v1), but that are labelled as “Awaiting peer review”. Yet, in your rebuttal above, you claim that they have already been pre-selected by your “editorial team”. If your editorial team has already conducted peer review, then why require another “peer review”? Alternatively, if the published status of a paper is only determined after “external peer review”, then why even bother having the “internal editorial team” look at papers and “select” them? This pre-selection might be perceived as bias, all the more so because the name(s) of the editor(s) on the F1000 Research editor board who “approved” the paper for publication do(es) not appear anywhere, which indicates a gap in transparency in an editorial system that is supposed to represent a more transparent model of publishing. Could you perhaps comment on the need for these “editors” at the pre-selection stage? Also, how does their function differ from that of the F1000Prime Faculty of experts?

      Issue 4) Sticking to the example in Issue 3, I noticed that the paper was first “published” online on October 14, 2013, so more than one month ago. (Note: I am not picking on this paper or its authors; it was simply the first of this category that appeared on page 1 of the hits.) Yet no external peers had yet reviewed the paper. How does F1000 Research ensure that appropriate reviewers are found, and that external peer reviews are carried out in a “reasonable” amount of time? Still staying with this particular paper, I noticed that it was assigned a DOI (doi: 10.12688/f1000research.2-214.v1) for version 1. Let’s assume, in a radical case, that three external peer reviewers provide comment, and that each suggests major edits, including to the title. By the time the paper is actually published, as v3 or v4, each version would have a different title, a different abstract, different content and a different DOI. How then does one actually reference the actual paper? Using the last DOI? If so, what’s to say that a 5th or a 6th external peer reviewer will not provide additional comment and that the title and content will not change again? Does F1000 Research actually allow peer reviews beyond 4 peers? Is this an ad infinitum process?

      Issue 5) Your Editor Board page is massively confusing, to be honest: http://f1000research.com/advisory-editorial. You have no fewer than 5 advisory groups and/or editorial panels. Why such a complication? What additional benefit does such a strange structure bring to F1000 Research over, let’s say, one editorial board of 50-100 experts in each main field of science? For example, I got red eyes trying to identify who the plant science specialists were on your 5 panels/boards. Frankly speaking, I am not impressed by the “big names”. I would be more impressed by a wider, more representative editorial board that shows greater geographic and topical diversity. The current structure exudes elitism and seems to cater only for an equally elitist clientele that can pay the expensive OA fees. Any comments on this editorial structure and the need to keep the boards/panels so elitist?

      Issue 6) This is the one paper of greatest concern to me, and possibly the biggest link to retractions and the reason why I personally perceive F1000 Research to be potentially “dangerous” to the world of science publishing. I turn to hit number 42 of 42, a paper by Y-h Taguchi (http://f1000research.com/articles/2-21/v3). The paper was peer reviewed by three specialists from the US, France and Norway, all of whom appear knowledgeable about the topic, i.e., valid peer reviewers. Examination of the peer reports and of the author’s rebuttals (http://f1000research.com/articles/2-21/v3#article-reports) reveals quite a profound critique and an intense rebuttal. However, consider the traditional publishing model: were the author to submit this paper to something like The Plant Journal (Wiley), Journal of Experimental Botany (Oxford University Press) or even Plant Science (Elsevier), then after the editor received the three peers’ reports (one acceptance + two rejections), the paper would most likely have been rejected, possibly even without the possibility of resubmission. Yet at F1000 Research this author was given what may be perceived by some as an unfair advantage against two rejection notices. Admittedly, Taguchi does appear to address most of the issues and shortcomings raised by the two peers who rejected the paper, but the key question is: is the paper that was resubmitted on November 9, 2013 the final, accepted version? Perhaps only time will tell. In such an up-and-down peer review, what is F1000 Research’s position regarding acceptance and rejection? How would your definitions differ from those of the “traditional” publishing model? As you can see, the practical aspect is already open to critique, almost along the lines of Nature Precedings, which ceased, I believe, not because of “changes to technology” but because of awareness of publishing ethics-related issues. If F1000 Research “accepts” and “publishes” papers that would, under the normal or traditional model, be “rejected”, or even retracted if found to be faulty post-publication, how does F1000 Research actually explain keeping such a paper online, with at least three DOIs having been assigned?

      Issue 7) In the case of Issue 6, where major edits were made by the author, does the author pay a full OA publishing fee for EACH version? What if a paper is rejected or, to use the F1000 Research euphemism, “not approved”? Does F1000 Research reimburse the author(s), or does it retain the OA publishing fee in exchange for the DOI assignment? In other words, are you actually charging a publishing fee or a processing fee? If the latter, then what percentage of the costs goes towards editor and/or peer reviewer salaries? If the former, then what percentage of the profits goes towards author royalties?

      Issue 8) I entered the term “reject” into the search function, and of the 26 hits, mostly related to some form of rejection within the experiments themselves, one did in fact appear to relate to a rejected paper, a microbiological study: http://f1000research.com/articles/2-87/v1. The referee status is “not approved”. Yet this paper seems to have somehow gotten past the best 1000 researchers on this planet! Yet this paper has been assigned a DOI. And yet this paper has been given the privilege of receiving a “How to cite” entry: Mohan V, Stevenson M. (2013) Molecular data analysis of selected housekeeping and informational genes from nineteen Campylobacter jejuni genomes [v1; ref status: not approved 1, http://f1000r.es/um] F1000Research 2013, 2:87 (doi: 10.12688/f1000research.2-87.v1). Personally, I find this hard to believe. How can a rejected paper be treated so regally? Would this not be a classic case of a paper that does not have the academic or scholarly qualities to be published, but was published anyway? Why should this paper not be retracted and its record wiped from the academic history book?

      Disclaimer: I work for no institute, publisher or editor board, so these queries serve only a personal interest, and not the interests of any third party. I ask these critical questions after an in-depth analysis of your site and publishing business, within the context of the RW story about F1000 Research and the first retraction, which piqued my curiosity. I believe that the answers could benefit those who are curious about this publisher but have concerns or reservations.

      1. In response to your questions (and additional information is available at http://f1000research.com/faqs):
        1) We only reject articles following the pre-publication checks if they are clearly not science (or life sciences), are in poor English, are plagiarised, don’t meet ethical standards, etc. Hence, there is absolutely no bias with regard to the topic of the content. The high proportion of Arabidopsis papers is therefore just a reflection of the articles that have been submitted to us.
        2) If you look across the full corpus of articles we have published, we have many submissions from across the world; some fields attract more submissions from particular countries than other fields do. We have only published just over 300 articles, so 42 articles in plant science could be considered a fairly high proportion. The journal is building momentum, but there are of course challenges that come with being new: like any new journal, we don’t have an impact factor yet (as you can’t have one for a couple of years), we use a very different publishing model, and we are still working hard to extend awareness of the journal.
        3) As stated in 1) above, the editorial checks are just that – we are not peer reviewing articles or making any judgement on them, except to make sure that, as far as we can see, they aren’t complete nonsense, that the English is good (so we don’t waste referees’ and readers’ time), that they are not plagiarized, that we have the data underlying the articles, that ethical standards are being met, etc. As we are not doing any peer review, this should not introduce bias into the system. The Managing Editor is clearly listed on our Contact page and she, and her team, check all submissions, so there is nothing hidden here. The F1000Prime Faculty have nothing to do with F1000Research – we are 2 separate companies. Some scientists who are on the F1000Prime Faculty also happen to be on the F1000Research boards, but these are two very distinct roles.
        4) We work hard to keep inviting referees and chasing them. Some papers take significantly longer to get refereed than others (as is the case at all journals). However, we continue to work until we get some referee reports. Technically, the referee process may never stop, although in reality most stop after 1 or 2 rounds, occasionally 3. Each version is citable, which means that you should cite the version you read. It is then clear which version you are referring to, and the reader can go back to that specific version. When the reader goes to that specific citation, a note will appear to make the reader aware that newer versions are available.
        5) We are currently working to revamp these pages to make them clearer and, as part of that, we may combine some of these boards to make the structure simpler. We are also aware that in some fields we need to expand the board to provide better coverage, so we will be working on this in the coming months.
        6) In the traditional approach, this paper would indeed probably have been rejected by the first few journals, but it would likely be published in a peer-reviewed journal somewhere and then treated as ‘correct’ by many when citing it. Here, as a reader, you can read the referees’ concerns, you can read the author’s rebuttal, and you can ultimately make up your own mind. Whenever you cite this article, though, it is clear in the citation that 2 of the reviewers weren’t happy. Version 3 will be sent to the referees to ask them to confirm whether this new version and the author’s responses adequately address their concerns. Their assessment of the author’s changes will affect the citation of this third version and determine whether it becomes indexed or whether the author decides to produce a further revised version.
        7) The authors can revise their article as many times as they wish, and they pay no further fee beyond the original processing fee for the first version of the article (which is modest compared to the fees of other major Open Access publishers). These fees are used to cover the publication costs – data hosting, editorial checks, production costs, and the editorial costs of ensuring the article is reviewed (and sometimes re-reviewed), often multiple times for multiple revisions of the same article.
        8) Only 1 referee has looked at this article so far – there are plenty of examples where 1 referee said to reject and several others said to approve (e.g. http://f1000research.com/articles/2-8/v2). It would be too early to make any judgement on the article at this point, and the fact that the article title itself (and hence the article’s citation) includes the words ‘not approved 1’ makes the peer review status of this article very clear should anyone decide to cite it (which would be unlikely with a citation like that). Once an article receives 2 ‘not approved’ referee statuses, and only ‘not approved’ statuses, we remove it from the default search on the site to reduce its visibility. Yes, it does still have a DOI and can still be cited, but again, it would be a very odd citation to use because the peer review status is part of the article’s title.

        1. I should respond, as I posted the questions. I wish to thank you for that enlightening response, which does reflect some weaknesses, but these most likely need to be viewed in context. It was, however, brave and responsible of you to provide a detailed, timely and frank response to all queries. This is exactly the type of attitude that your competitors, such as Elsevier, Springer, Taylor and Francis+Informa, Wiley, and others, are lacking. They are watching this blog carefully, because they know that they are being watched and scrutinized carefully too, yet they have FAILED to address their critics in the manner that you at F1000 Research have done. With every step these publishers take away from transparency, scientists step further away from their traditional mind-set and move towards the F1000 Research-type model. Your openness and willingness to engage with critics in the base, i.e., scientists who contribute to your journal, is essential going forward. While the big, traditional players try to cement their control of the publishing industry through a rather pathetic attempt at ownership of ethics, they are rapidly losing scientists, clients and trust as they gradually expose their hypocrisies and, in fact, lack of ethics. So, although I do not have the money to publish in F1000 Research (though I would like to), you get my thumbs up for responding. Let’s hope that, as time progresses, you do not become arrogant once you obtain an impact factor (or some other pseudo-quality metric) and forget to always address the critics at the base. As someone once said (sorry, I don’t know who): “While reaching for the stars, never forget the flowers at your feet”.
