Science reporter spoofs hundreds of open access journals with fake papers

Alan Sokal’s influence has certainly been felt strongly recently. Last month, a critique by Sokal — who in 1996 got a fake paper published in Social Text — and two colleagues forced a correction of a much-ballyhooed psychology paper. A few days after that, we reported on a Serbian Sokal-style hoax paper whose authors cited the scholarly efforts of one B. Sagdiyev, a.k.a. Borat.

And today, we bring you news of an effort by John Bohannon, of Science magazine, to publish fake papers in more than 300 open access journals. Bohannon, writing as “Ocorrafoo Cobange” of the “Wassee Institute of Medicine” — neither of which exists, of course — explains his process:

The goal was to create a credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable. Submitting identical papers to hundreds of journals would be asking for trouble. But the papers had to be similar enough that the outcomes between journals could be comparable. So I created a scientific version of Mad Libs.

The paper took this form: Molecule X from lichen species Y inhibits the growth of cancer cell Z. To substitute for those variables, I created a database of molecules, lichens, and cancer cell lines and wrote a computer program to generate hundreds of unique papers. Other than those differences, the scientific content of each paper is identical.
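
To make the Mad Libs templating concrete, here is a minimal sketch, in Python, of the kind of generator Bohannon describes; the molecule, lichen, and cell-line names below are illustrative placeholders, not entries from his actual database.

    import itertools
    import random

    # Illustrative stand-ins for Bohannon's databases of molecules, lichen
    # species, and cancer cell lines (hypothetical names, not his real lists).
    MOLECULES = ["compound A", "compound B", "compound C"]
    LICHENS = ["Lecanora sp.", "Cladonia sp.", "Usnea sp."]
    CELL_LINES = ["HeLa", "MCF-7", "A549"]

    TEMPLATE = ("{molecule}, isolated from the lichen {lichen}, inhibits the "
                "growth of {cell_line} cancer cells.")

    def generate_papers(n, seed=0):
        """Return n unique X/Y/Z abstracts; the scientific content is the same
        in each, only the substitutions differ, as in Bohannon's design."""
        combos = list(itertools.product(MOLECULES, LICHENS, CELL_LINES))
        random.Random(seed).shuffle(combos)
        return [TEMPLATE.format(molecule=m, lichen=l, cell_line=c)
                for m, l, c in combos[:n]]

    for abstract in generate_papers(3):
        print(abstract)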

Bohannon then combed the Directory of Open Access Journals (DOAJ) and Jeffrey Beall’s list of possible predatory publishers, using various filters:

The final list of targets came to 304 open-access publishers: 167 from the DOAJ, 121 from Beall’s list, and 16 that were listed by both.

The results?

By the time Science went to press, 157 of the journals had accepted the paper and 98 had rejected it. Of the remaining 49 journals, 29 seem to be derelict: websites abandoned by their creators. Editors from the other 20 had e-mailed the fictitious corresponding authors stating that the paper was still under review…

Bohannon’s analysis, which goes into far more depth, demonstrates an appalling lack of peer review and quality control at the journals he spoofed. But it’s important to note, given the heated and endless debates between open access advocates and traditional publishers, that there was no control group. Bohannon agreed that was a limitation when we asked whether he had considered, as one of his sources suggested, the same spoof with traditional publishers:

I did consider it. That was part of my original (very over-ambitious) plan. But the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample? Instead, this just focused on open access journals and makes no claim about the comparative quality of open access vs. traditional subscription journals.

Still, we will not be surprised if some traditional publishing advocates use Bohannon’s sting as ammunition to fight wider adoption of open access. That gunpowder may be a bit wet, by the way. Bohannon writes:

Journals published by Elsevier, Wolters Kluwer, and Sage all accepted my bogus paper.

And Retraction Watch readers may recall that it was Applied Mathematics Letters — a non-open-access journal published by Elsevier — that published a string of bizarre papers, including one that was retracted because it made “no sense mathematically” and another whose corresponding author’s email address was “[email protected].”

Retractions, as readers may guess — and perhaps hope — will be forthcoming now that Bohannon’s sting has been revealed. Here’s part of a message the Open Access Scholarly Publishing Association sent its members earlier this week:

In the event that your publishing organization has accepted and published the article, we expect you to follow recognized retraction procedures. If you require any assistance or guidance on retracting the article, the OASPA board will be happy to assist with this. In addition, should it be the case that OASPA members have published the paper, we will prepare a retraction notice/explanation that your organization may choose to use.

We’ll see if this changes the mind of the editor of the Journal of Biochemical and Pharmacological Research, who shrugged when “Cobange” told him the paper was fatally flawed and should be retracted. A correction would do, he said.

131 thoughts on “Science reporter spoofs hundreds of open access journals with fake papers”

  1. Reblogged this on Brian M. Lucey and commented:
    An Irish newspaper recently published a fairly glowing report that was ultimately sourced from a pay-to-publish journal. Now, the research may be fine, but it struck me that this needed to be noted. The journalist, well known in the area, when contacted by me and told about the “you pay, we publish” nature of this journal and its publisher being on Beall’s list of predatory publishers (http://scholarlyoa.com/2012/12/06/bealls-list-of-predatory-publishers-2013/), kept saying “but it’s peer reviewed”. Despite many attempts to get this well known journalist to note that not all peer review processes are the same, it failed. Well done, Science.
    FWIW, the publisher of the journal I noted is also one of the ones on the Science list.

  2. Also worth quoting:

    “Some open-access journals that have been criticized for poor quality control provided the most rigorous peer review of all. For example, the flagship journal of the Public Library of Science, PLOS ONE, was the only journal that called attention to the paper’s potential ethical problems, such as its lack of documentation about the treatment of animals used to generate cells for the experiment. The journal meticulously checked with the fictional authors that this and other prerequisites of a proper scientific study were met before sending it out for review. PLOS ONE rejected the paper 2 weeks later on the basis of its scientific quality.”

    1. Exactly. There is a lot of bashing of PLoS ONE on this site, but my field has had several great papers in this journal. This quote moves PLoS ONE out of the typical group of open-access publishers.

    1. Oh the morality of academia. What nobility.

      The review process is one thing. What the scientific community really needs to address is the plethora of papers by scientists who publish bogus results, just because a PhD student will not get their degree unless they prove that the hypothesis their boss presented to a foundation in order to receive a grant is right. The scientific community needs to address a broken system driven by money and fame hunting. The scientific community needs to accept the “error” in “trial and error”. Academia has divorced science; if it hadn’t, most papers would report failed hypotheses. When was the last time you saw a paper with an unintentionally (i.e., not by design) failed hypothesis? Especially in high-impact-factor journals. Every time you read a paper, keep in mind that some (literally poor) student’s degree relied on getting things to work the way they were meant to work in some academic’s mind, sitting behind a desk. And until it does work, no degree for you.

      P.S.: The magazine Science attacking other journals is laughable.

      Comment not intended as a reply to anyone above

      1. Absolutely correct, Nikola. All hypotheses are intelligently designed and therefore they don’t fail.

      2. Setting up a hypothesis post facto in order to ‘falsify’ it is often facile, and this is seen quite commonly in biology. There are two alternatives, the absurd and the favoured. What a surprise: the absurd is rejected by the experiments. I know of several examples, particularly in journals seen as ‘important’.

  3. I suspect that traditional journals accept a similar number of bad studies. I have requested the retraction of a paper by Parmesan, C., et al. (2000) Impacts of Extreme Weather and Climate on Terrestrial Biota. Bulletin of the American Meteorological Society, vol. 81, 443–451, because the authors blamed extreme weather for the extinction of a population of butterflies that had recently colonized a logged area, while a few hundred meters away a natural population thrived during the same weather. They blamed extreme weather but omitted the success of the natural population. Their faulty conclusions were then used in a paper by climate scientists (Easterling, D.R., et al. (2000) Climate extremes: Observations, modeling, and impacts. Science, 289) to suggest extreme weather was causing extinctions, and that paper was cited by thousands. Read http://landscapesandcycles.net/fabricating-climate-doom—part-3–extreme-weather.html

    1. Comparing PLoS One with journals that appear on the Beall list of OA publishers is like comparing chocolate mousse with raw egg. The former is refined and aimed at perfection, accepting flaws whenever red flags are raised. The latter is raw and tasteless, but is surprisingly still appreciated by many, even if eaten raw. The clearly endless cases of bad science in equally clearly bad open access journals provide a space where unethical and bad scientists can express their real and/or fake results. The result: it is becoming increasingly difficult to perceive what is correct and what is not. This is a serious problem, because as scientists we look to the literature as a beacon of reference, and we often use the results and conclusions of other studies upon which to build our own new experiments. If a certain percentage of what we use as references is bogus, then that also makes some of what we research and publish bogus. Bogus today will be 10-fold bogus 3 years from now.

       Even if John Bohannon’s experiment showed us what we already knew (i.e., about the rot in the predatory open access movement), does that make his actions less unethical? I think not. Dr. Bohannon should also suffer consequences for his actions because he has, in effect, potentially further corrupted the literature. Of the 157 bogus papers that were accepted, how many will eventually make it into the literature (maybe we will only understand the outcome in 2014)? Even one finally published paper would make his experiment dangerous. This is a key question moving forward. What makes Mr. Bohannon’s actions any more ethical than those of truly fraudulent or fake scientists? Just because he is from Science doesn’t make his actions right. And just because he has a fancy web-site doesn’t make his ethics superior to others’.

       The truth of the matter is that he relied upon a blog (Beall’s list) that does not in fact give any quantification of the level of fraud committed by the publishers on its list. That makes the Bohannon paper fundamentally flawed. When a top-level publisher like Science, published by the AAAS (American Association for the Advancement of Science), allows flawed papers based on flawed blogs (i.e., Beall’s) to be published, that is serious. A blog is not a scientific center, and the Beall blog list of predatory publishers is flawed, so much so that Jeff Beall himself labels it “Potential, possible, or probable predatory scholarly open-access publishers”. The Beall “advisory board” of four is anonymous, which is a highly unscholarly thing to do. There is an appeals page (http://scholarlyoa.com/appeals/) that states “Publishers can appeal the inclusion of their journal or publisher on this blog’s lists. In the email, please state the reasons why you believe your publisher or journal should not be included. The email will be forwarded to a four-member advisory board. Appeals are limited to one every 60 days.” Who are the four on this powerful panel? How much justice can be achieved with one appeal every 60 days, basically only 6 appeals a year? This clearly makes the Beall blog unscholarly. Incidentally, Science has a 2012 Impact Factor of 31.027. How has Dr. Bohannon benefitted, financially or in any other way, from a paper that was based essentially on fraudulent submissions and on a flawed assumption (i.e., that the Beall list is in fact factually correct and scholarly)?

       Imagine if each and every one of us were to submit 300 fake papers to what we perceived to be predatory publishers. Then we would, after wasting their valuable time and resources, send them a letter asking for the paper to be retracted because we used a fake name and a false data set, simply because we wanted to conduct an experiment to test the system. This whole experiment by Dr. Bohannon, the more I think about it, is fundamentally wrong, at least from an ethical standpoint. I would be curious why publishers haven’t taken legal action yet over this guy wasting their time and human resources. If I were a publisher who had been scammed by Dr. Bohannon, I would certainly be, at minimum, irritated with his show. Ironically, Dr. Bohannon, in his 2011 piece on the Science Hall of Fame, claimed in the section “4. Embrace controversy” that “This is not to say that you should be evil.” How can the submission of 300 or more bogus papers not be considered malicious? After all, he also made fake submissions to Elsevier and Wolters Kluwer, some of the biggest names in science publishing. Just because he conducted a fun experiment, does that make it right?

      1. PS: Bohannon has the responsibility of providing a FULL list of the bogus names he used, the bogus affiliations and the full bogus paper content of all 300+ papers. I am absolutely surprised that this meta-data was not made available in Word or PDF format by Science. Where is the justice and where is the transparency? Furthermore, it is imperative that Dr. Bohannon state clearly that he did not collaborate in any way with Jeff Beall himself, otherwise this could constitute a serious conflict of interest. His paper offers no acknowledgements, which implies that every aspect of this study, methodology and interpretation was the work of his own hands. This also needs to be formally confirmed. Finally, he also needs to confirm that he created several hundred fake e-mail addresses in order to submit as many equally fake papers. On this blog I have often seen bloggers and even publishers be extremely critical of scientists creating fake e-mail addresses and fake names for submissions. Where is the outrage now? It astounds me that this paper and study are actually being touted as ethical, correct and a pillar of science writing and publishing. The background of this paper is far too cluttered with unknowns that the author should respond to.

        1. The supplementary webpage that goes with the article – http://scicomm.scimagdev.org/- has all the email correspondence (and so also the email addresses and names he used), so this is actually available. It’s not the best format though – one has to click on every data point to bring up the email list.

          1. Booker, the page you provide with the “evidence” is giving a 404 error. Can you, or have you been able to, download the full list? In fact, I am going to make a formal request to Dr. Bohannon right now to provide the full texts of all the false papers he submitted, the names, the e-mails and the fake institutional addresses. If you ask me, unless he provides all of this evidence for his study, this Science paper should be retracted.

          2. Thanks for that Steven, I clicked on a supplementary data link earlier and went straight to the interactive page instead, but it’s far easier to look at with the link you’ve given.

            Re: JATdS, I placed that link between two dashes, and it seems the dash at the end of the link has ended up in the url, but if you delete that it should work.

      2. PLOS ONE is “chocolate mousse”, “accepting flaws whenever red flags are raised”?

        Puh-lease. It’s very difficult to get a paper actually fully rejected from PLOS ONE, even for valid scientific reasons. All they want is your money, so most of the time they just keep sending it out for review and revision, time and again.

        In addition, the quality of writing in PLOS ONE is generally quite abysmal. The only purpose PLOS ONE serves is as “The Big Journal of Negative Results”. And we’d be better served by just making a free, open-access database for that.

        1. Bobo2, I don’t totally disagree with you. However, can one compare papers published by PLoS One with Wudpecker Journals (http://wudpeckerresearchjournals.org/) on Beall’s list? This is all I am saying. While intellectual snobs ruthlessly attack fairly valid OA publishers like PLoS One for TRYING to keep things real, validated and peer reviewed, at the other extreme of the scale there is total publishing fraud taking place. I guess what I’m saying is that things need to be observed in a relative sense AND in an absolute sense. Criticism of the excessive fees charged by PLoS One, on the other hand, I may be willing to agree with.

        2. I must say that I have many Brazilian colleagues, who I think are good biochemistry scientists, who have published in PLoS One several times, and they are unable to understand the issues raised here. They got good reviews, and they never paid one cent! Maybe there are differences between fields?

        3. PLOS ONE’s rejection rate is about 40%. Perhaps significantly, Nature’s Scientific Reports has about the same rejection rate. The same is true of yet another publisher (sadly I forget which) that, like those two, conducts peer review on the basis of scientific soundness rather than subjective guesses at likely impact. The tentative conclusion is that about 60% of submitted papers are sound, and the rest are fundamentally flawed.

          1. I am an EBM for SREP. The rejection rate is extremely low (in my experience VERY low) because the rules clearly state that technical soundness, not novelty, is the only criterion. However, both the publishers and, I suspect, the EBMs are not allowing any plagiarism.

        4. Bobo2: “In addition, the quality of writing in PLOS ONE is generally quite abysmal.”

          Do you have any data to back that assertion up, or is this just your subjective impression?

        5. Bobo2,

          As has been pointed out, the acceptance rate for PLoS One (at least as of April 2012) was 68-69%. I have been an AE for PLoS One for about 4 or 5 years, and in my experience all of the reviewers for the papers I have handled have taken their role extremely seriously and have provided thorough and critical reviews. If the science was done poorly, I have rejected the paper. Indeed, I think I am at about the 50% mark on acceptance.

        6. Definitely not what I saw in the last 3 years reviewing papers for them (Immunology/Pathology field): I rejected 50% of what I got, a total of 5 out of 10 papers. The editors followed my recommendations. Why all those angry, pointless comments on PLoS One? Our time would be better spent on a decent effort to publish clean, relevant and ethical work coming from our respective institutes.

  4. Ivan, what a sensational effort: “By the time Science went to press, 157 of the journals had accepted the [fake] paper and 98 had rejected it”! The appalling state of quality control in published science is almost unbelievable.

    After Bohannon’s wonderfully revealing fraud, my concerns about the University of Sydney’s extraordinarily faulty “Australian Paradox” paper – a paper self-published in one of MDPI’s pay-as-you-publish e-journals, with the lead author acting as “Guest Editor” – seem almost tame by comparison: http://www.australianparadox.com/pdf/GraphicEvidence.pdf

  5. So what’s the lesson here? Clearly, the number of Journals (both open-access or traditional) that can offer a stringent editorial quality check, followed by a rigorous peer-review, is limited. The good news is that most scientists (students, postdocs, and faculty) who feel strongly about their work will know which Journal(s) to consider when it’s time for submission. To me, the tragedy is not the high percentage of “Journals” which publish fake papers without any review, but rather, the large number of scientists who feel compelled to submit their work to such Journals to begin with.

  6. The lesson is simple, no matter who publishes: Question authority! The real crime is that bad studies are bandied about like gospel by people who have not critically analyzed the work. As Mark Twain wrote, “In religion and politics people’s beliefs and convictions are in almost every case gotten at second hand, and without examination, from authorities who have not themselves examined the questions at issue but have taken them at second hand from others.” The same can be said about science.

  7. It is unfortunate that the phrase “open-access journal” is used as the sole descriptor for a certain type of lousy journal. Some of the best journals are open-access. Open access itself is a good thing.

    1. Frank, I fully agree. There are bad scientific journals and there are bad scientists, just as there are good scientists and good journals. Just because a journal is open access does not automatically make it bad. I agree that open access is fundamentally good. The problem is that the OA movement is being abused and preyed upon by fraudsters and is being used as a way to make easy money in financially testing times. There are many valid OA journals with good OA papers. OA science speeds up research by providing unlimited access to information, and I believe that the Bohannon study proves that. BUT the way in which Bohannon proved this was fundamentally wrong, and unethical. As I state above, he basically tricked, through fraud and lies, 157 journals. He preyed upon their weakness and abused the system, not because it should have been abused, but because he thought he had some special power to abuse the system, just because he works for Science. Did the AAAS approve his study and methodology before he had his paper published? Did the AAAS provide a stamp of approval to acts of fraud, trickery and unethical behaviour? Is this acceptable to COPE and to the ICMJE? Dr. Bohannon should now suffer the consequences of his little experiment. There should be much more outrage about this study.

      1. Bringing a gun through airport security in order to hijack a plane is quite different from doing it to demonstrate that airport security is flawed. No outrage about the latter.

        1. The “terrorized state” mentality has no place in science or in science publishing. A gun is a gun; even an unloaded gun is ultimately still a gun. Proving a point by doing something illegal or by using unethical methods might sound like fun to some, or like a good way to prove a point, but in fact it only makes the situation worse. It breeds fear and distrust, just like the “harmless” gun you refer to. Does the AAAS really believe that this paper is so harmless and that the methods used are ethical?

          1. According to the Science article, after the papers were accepted the bogus authors retracted them prior to publication. I hope that helps.

        2. In this case it was a fake gun in the analogy. And in reality his papers can bring no harm to scientists, only to publishers, as those who understand the field will immediately recognize the problems. I think his methods were great for a flawless demonstration of the weaknesses in the field, and I can think of no more effective approach. Would you complain less if COPE had carried out this same study? If yes, why?

    2. Agreed: Open access is not the only variable. Michael Eisen has a deliciously snarky response to this story:

      In 2011, after having read several really bad papers in the journal Science, I decided to explore just how slipshod their peer-review process is. I knew that their business depends on publishing “sexy” papers. So I created a manuscript that claimed something extraordinary – that I’d discovered a bacteria that uses arsenic in its DNA instead of phosphorous….

      1. In general, when a scientist submits a paper to a journal, especially with more established publishers, there is usually a guarantee given, explicit or implicit, that ethical standards are met, including that the submission does not involve false authorship, false data, and/or false institutions. Dr. Bohannon, before he published his “sting operation” paper, broke all ethical rules upon submission and failed to honour the requirements of those publishers. He lied about his identity, he lied about his institutes, he used fake e-mails, he lied about the data, he lied about the factual content, and he purposefully and maliciously tricked publishers, even innocent and honest ones (the ones who actually picked up on his trickery). This means that he violated all initial guarantees of honesty and ethics. What makes him so special that he thinks he is above ethics and publishers’ rules? As I say, there should be outrage here. Real lives are being affected, on the side of the publishers and of the scientists alike. Dr. Bohannon has endangered the industry even more, I believe, with this act of trickery. And what does the AAAS feel about this?

      2. The arsenic story is a gold standard of its kind, but there are other extraordinary reports. Just look for “muscular dystrophy cured with a single injection”, published as “Rescue of Dystrophic Muscle Through U7 snRNA-Mediated Exon Skipping”, Science 3 December 2004: Vol. 306, pp. 1796-1799. This is not science but daydreaming.

    1. Wow, I’d not heard of that paper. Fun abstract for all, since my institution doesn’t have this journal:
      —————

      Abstract
      A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.
      The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.
      With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.
      ————————
      Makes you wonder: if peer review panels were larger, would they be more likely to split 50/50 and deadlock over papers, given that a resubmitted paper has a high chance of being rejected if it goes through review again?

      1. That should be shocking. Is peer review arbitrary? That would be about as bad as no peer review at all.
        Many proposal writers, I believe, have had similar experiences. Not only do reviewer opinions for the same proposal often differ dramatically, the evaluation of the same proposal when re-submitted the next year is often totally different.

        1. Every time people experiment with peer review in journals, the results are quite “bad” for whoever has a strong belief in the core value of peer review. See for example the book by Chubin & Hackett:
          http://books.google.fr/books?hl=fr&lr=&id=Xfsh6D29WoIC&oi=fnd&pg=PR11&dq=chubin+peer+review&ots=gky1pW9H0m&sig=Fx4wrGcpPaAl017-8QKSaZYd8Dg&redir_esc=y#v=onepage&q=chubin%20peer%20review&f=false

          The problem is the alternatives: PLoS One “soundness” is clearly one, but it is then criticized for being too lax. Others are open working-paper archives, à la RePEc, where in the end only usage decides which papers are “good” or “exceptional”, rather than 2 to 5 folks.

  8. I’m the publisher of an open access journal which did reject the fake paper. I won’t state here the name of the journal because I don’t intend my comment to be taken as self-promotion.

    To provide a different perspective on Bohannon’s findings, I think it’s worth observing that the open access publishing industry–by which I mean the publishing firms and their journals–is in a state of rapid development. Look at any industry in a similar stage of development and you’ll probably see much the same thing: a lot of immature new players moving in who will probably not survive as the industry matures, and products/services that are not yet perfectly refined. This isn’t meant to suggest that poor peer review is excusable, but nor should it be taken as a sign of an existential crisis for OA.

    Another point is that readers and authors need better information on how to distinguish good journals from bad. I would suggest that these questions could be a good starting point:

    1. Is the publisher a member of OASPA and other industry bodies concerned with editorial ethics such as COPE, ICMJE?

    2. Does the journal or publisher have clearly stated editorial policies?

    3. Does the journal have an editor in chief and editorial board whose affiliations are stated, and do those affiliations demonstrate significant international standing?

    4. The acid test: do the papers published in the journal reflect the standards of quality claimed by the journal?

    Readers will note that I don’t make mention of impact factors or pubmed indexing here. Journals can be very good without having either, and particularly in the case of impact factors it can be very difficult even for high quality journals to obtain them.

    1. These are all really good points. One aspect that I think gets overlooked in the discussion sometimes is how the rapid growth of the publishing industry puts added pressure on a given Journal to find and retain a pool of qualified reviewers. I am reviewing for an OA Journal and I notice that many submitted manuscripts take 1-2 months before they are matched up with reviewers. I am curious about your experience.

      1. Finding a sufficient quantity of qualified reviewers is certainly a challenge. In brief, I think the answer lies in 1) having clear, robust, and well-developed peer review guidelines, 2) reminding reviewers of these guidelines each time they’re invited to perform a review, and 3) having an adequate source of reviewers: a well-curated editorial board is the cornerstone. Other readers may be able to elaborate on these or add additional points.

        1. Yes, I have experienced this already. Though journals promise a 2-week review time, it is often not possible because reviewers are hard to find. One to two months is not bad at all.

    2. Isn’t part of the problem simply too many manuscripts searching for a publication outlet, and that is caused by unrealistic expectations of scientific productivity, coupled with inappropriate metrics? If one quality paper per year were regarded as standard for the average productive researcher, we could do away with the madness of bean-counting and judge scholars by quality, not quantity.

  9. That really is quite a strange article. I think his whole presumption that open access journals are poor quality ruins the article and the focus of the ‘experiment’. You can see how it tarnishes his thinking, so that he isn’t approaching the subject with a proper analytical mindset. He’s saying ‘open access journals are crap, so I’m going to set out to prove it’, but I think the truth is much closer to the comment further down in the article that many non-open access journals would probably have accepted it as well. Without that comparison, the whole thing says more about the process of science in general than it does specifically about open access journals, in my mind. And besides, even ‘traditional’ publishers charge page fees, sometimes quite substantial, so the message he’s trying to convey, that this is all about open access journals just taking the money, doesn’t wash either.

    In all honesty, it’s an interesting experiment, but I don’t think the results are as ground-breaking as they’ve been made out to be. Everyone knows the peer review process is inconsistent and in real-world terms can often be a lottery. I would have expected a slightly better write-up from Science too – a table with a clearer demarcation of the numbers and percentages published – there are so many numbers in the article that it can be hard to figure out what denominator he’s referring to at various points. And then there’s this “26 rejections” sentence tacked on in one place:

    “and among the India-based journals in my sample, 64 accepted the fatally flawed papers and only 15 rejected it. 26 rejections.”

    I wasn’t too sure about the tone of his article looking down on the developing world – when you look at the supplementary interactive website (http://scicomm.scimagdev.org/) there are still quite a lot of acceptances in the UK, US, Australia, Italy and New Zealand (my home country, hmm), which isn’t really reflected in the write-up.

    Ironically, I think his write up for Science could have used some peer review!

  10. There need to be some consequences for Bohannon here. I assume he did not actually pay the publication fees for the accepted papers; is that business fraud? Can the publishers sue him? If he did pay, is that $200K from the AAAS budget? Good grief.

    I’m quite skeptical about using unethical and possibly illegal means to expose unethical conduct by others.

    1. StrongDreams, below is the e-mail I just sent the AAAS and Dr. Bohannon (slightly edited). I think it is important to leave a public record here so that the AAAS cannot say that it did not receive the e-mail.

      From:
      To: “[email protected]
      Sent: Friday, October 4, 2013 11:39 AM
      Subject: Science John Bohannon query, claims and request

      Dear Dr. John Bohannon and the Science Editors (AAAS),
      CC (next, separate e-mail): your “fraud assistants” and/or select blogs.

      REF: http://www.sciencemag.org/content/342/6154/60.full (Science, “Who’s afraid of peer review”, probably a play on “Who’s afraid of the big bad wolf”?)

      Thank you for your efforts to confirm what was already quite well known about the predatory open access journals. Therefore, you neither invented the chicken, nor the egg.

      However, I and others have some concerns about the way in which your study was conducted, which potentially presents several fundamental flaws. Allow me to explain these in more detail, and as succinctly as possible, in this e-mail.

      I have now been able to download the 400+ Mb of files that supplement your article, including the fake papers and the fake e-mails.

      At the outset, the Science web-page states: “Read the full report here: http://www.sciencemag.org/content/342/6154/60.full”. However, that link leads to the following message: “Not Found. Content not found.” In fact, several links on the Science page are dead.

      Allow me to list my 13 queries and/or criticisms for easier understanding of my concerns:

      1. In those annexes/supplements, the Excel file of the submission data lists 315 papers, not 304 as you report in the paper. Can you please explain the discrepancy between these values? Your paper indicates that you created one “spoof paper”, but in fact you created several hundred fake papers.

      2. In the actual fake paper file names and also in the e-mails folder, several numbers are missing, for example: 4, 5, 6, 8, 12, etc. Were these other fake papers that were submitted to other journals or publishers? If so, why has this data been deleted? If not, then why the gaps? Please explain these “missing numbers”.

      3. Could you indicate exactly which of those 304 (or 315) papers are in fact published and in circulation or in processing. Kindly indicate the publisher and the web-site.

      4. Could you indicate if in fact you made any payment of any APFs.

      5. In your “sting paper” you claim “I have conferred with a small group of scientists who care deeply about open access.” On the web-site it states:
      “The fake papers used for this project were created with help from Nicholas Douris (1), Shangyu Hong (1), ffolliott Martin Fisher (1), Olga Dudchenko (2,3,4), Elena Stamenova, Jim Robinson, Miriam Huntley, Benjamin MacDonald Schmidt, Adrian Sanborn, Ivan Bochkov, and Erez Lieberman Aiden.
      1. Harvard Medical School, Boston, Massachusetts
      2. School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts
      3. Baylor College of Medicine, Houston, Texas
      4. Rice University, Houston, Texas
      5. Broad Institute of MIT and Harvard, Cambridge, Massachusetts
      6. College of Social Science and Humanities, Northeastern University, Boston, Massachusetts”

      Are these 11 individuals the “small group of scientists” you are referring to? If so, why is no institutional affiliation given for the last 7 individuals? Who are they? Do they have any affiliation with publishers at all, serve on any editorial boards, or have any actual, potential or hidden conflicts of interest? Incidentally, is the first name of Fisher (ffolliott) correct, or is that also, perhaps, a dud name?

      7. It is not clear exactly how much and to which aspects these scientists contributed to your study, and whether in fact they should have been co-authors. Strangely, an official Acknowledgements section does not appear as part of your official paper, but only as a small section of the online page. Why is that? You appear to take full credit for all the work, including all of the spoof submissions, so confirmation is of course required that you handled all background operations single-handedly. Alternatively, indicate more precisely what

      8. Kindly indicate, in detail, the participation, if any, of Jeff Beall, whose informal blog lists, with no quantitative criteria, have been used as the basis of the selection of journals to target. Please also confirm that none of your “assistants” are associated in any way with Jeff Beall or the DOAJ.

      9. Indicate exactly how you sampled the DOAJ journals. There are several thousand journals, so how were topics and titles selected? Based on what qualitative and quantitative criteria? These details are not outlined in your Science paper.

      10. “These data sets were curated by John Bohannon and Kelly Servick.” In plain English, what does this actually mean? Who exactly is Kelly Servick, how is she involved with this study, and why is she not a co-author?

      11. Finally, when a scientist (or anyone for that matter) submits a paper to a scientific journal, independent of whether that journal is in fact “predatory” or not, they make certain legal and ethical guarantees that what they are submitting is true. Can one thus say that in fact you and your colleagues openly lied to hundreds of publishers and journals (and their editors) upon submission and that in fact you faked 100% of your submissions, thus failing the ethical and legal guarantees that most publishers required? In other words, would you not say that you in fact acted unethically before you exposed the lack of quality control by the publishers?

      12. In the same vein as question 11, did the AAAS and Science specifically give you permission to act unethically and to submit hundreds of fraudulent papers? If yes, why would this not be considered a smear campaign by Science to fend off competition from the open access publishers so as to conserve its own traditional print-journal profits? If no, then why would the AAAS, an apparent pillar of ethics, and your employer, not have screened your publishing protocols?

      13. Considering that you in fact committed a serious ethical faux pas (at least 304 of them, in fact), considering that you have tricked and harassed the publishers, and weighing in the fact that you presumed the Beall list to be factually correct, should your own Science paper not be retracted from the scientific literature because it goes against the grain of many of the core values that science publishing is supposed to be based on?

      I look forward to an open and frank response by you and by Science (AAAS) to these concerns, which affect all of us scientists in the real publishing world.

      I can with confidence say that I would never cross that red line, even if I had serious discords with the ethics, quality or validity of the publishers or journals. If each and every one of us were to submit 300 fake papers into the publishing sea, after 3-5 years, who would know what is real and what is fake?

      Sincerely,

      1. ” If each and every one of us were to submit 300 fake papers into the publishing sea, after 3-5 years, who would know what is real and what is fake?”

        By using our scientific knowledge to identify papers with blatant methodological errors? Since that is what was wrong with the fake papers submitted.

        There were concerns that there are many open access journals behaving in a predatory manner, and this has shown that yes indeed many do publish any garbage they get their hands on.

        Established journals are not perfect but at least they have some nominal peer review system that catches some of the junk trying to get into the literature.

        1. Relax, Dr. Bohannon didn’t have to submit several hundred fraudulent papers to prove this fact. The correct way to demonstrate the lack of academic standards at a journal is through post-publication peer review. I can assure you that several mainstream publishers have truly predatory qualities, starting with their pricing.

          1. Thank you for the reply, although I strongly disagree with the remainder of your comment. There is a serious problem where dubious science is published, there is a backlash from credible scientists in the field and many post-publication comments are published, but nothing changes. There is no retraction, and often no correction. So the literature gains another faulty paper, and then even more literature is generated and time is wasted by scientists trying to correct the initial junk paper. So I don’t think that ‘post-publication peer review’ is the magic wand you are making it out to be. In light of that, I think what Dr. Bohannon did was quite right. He showed that many open access journals are quite unethical and not interested in (or capable of) promoting scientific truth.

      2. Unfortunately and unintentionally, our journal had also accepted that spoof paper. As Editor-in-Chief of the journal “Indo Global Journal of Pharmaceutical Sciences”, I take full responsibility for this mistake. On behalf of the editorial staff, I wish to apologize for this unethical acceptance.

        Unlike others, we had even responded to the last mail received from Dr. Dosema (the fake name of Dr. John) on 24th August, 2013. I am herewith mentioning a slightly edited version:

        “Dear Dr. Dosema, I sincerely wish you find this mail in good health. Dr. Dosema, on behalf of editorial team of Indo Global Journal of Pharmaceutical Sciences, I would like to sincerely thank you for observing your experimental flaw. Science need true scientists like you Dr. Dosema. We always encourage scientists like you Dr. Dosema. Sincerely looking forward to receive your corrected manuscript soon.”

        Though I am quite embarrassed by this situation, I thought it better to come out and accept it rather than hide behind the curtain. As our journal is at a very young stage, we are constantly trying hard to improve its quality, and in some respects we have even been successful in doing so. But to err is human. And I believe that if something wrong has happened on your side, then it is better to face it at the earliest, before the situation gets out of control.

        But I am happy to know that scholarly readers and bloggers are very much concerned about the unethical act of Dr. John and Science/AAAS (as an act of an employee is the responsibility of his/her employer). He violated all rules of submission and was certainly not authorized to carry out such a sting operation. If this is accepted, then in coming years we will see many sting operations by unauthorized persons conducted in this illegal manner, and soon our scholarly databases will be filled with fake articles.

        Much has already been discussed about his act: his failure to use any control, his direct attack on developing countries and the OA publishing portal, and his exclusion of traditional subscription-based journals with the excuse of a shortage of time. In addition to the points already made by fellow members, I would like to ask:

        As a fundamental of any scientific research, a single set of experiments is not valid to claim any finding unless it is repeated in triplicate and SEM and significance plots are generated. It is always recommended to take a small data set and a few experimental protocols, but the outcome/result should be reproducible. As I read his article, he mentioned that he submitted one article to each journal, and purely on the basis of that single article he, with the strong support of Science/AAAS, blamed those journals’ peer-reviewing processes. How can he reach any conclusion from a single experiment, and how can Science support such a study? Has he, with another fake name, submitted another article to all these journals and found the same results?

        On the negative side, all this has now forced us to critically check all authors and their institutions, and to minimize our support for young researchers publishing their first papers. On the positive side, it has made us realize the need for systemic improvement in our editorial process, to make our backbone much stronger.

        1. I think the main issue with the papers was not the authors’ names and affiliations but that the content was absurd and unoriginal. In principle anyone can publish a scientific paper, even under a fake name; the community wants the results to be sound and precise. In a bad but relevant analogy, great books have been published under aliases. The scientific content is what peer review and editorial handling should be investing time in… I think these papers aimed at showing where editors and reviewers were either lacking or maybe concerned with something el$e.

          1. Double-blind trials are a way to reduce bias in clinical studies, and we would not think of approving a medicine without them being tested in that way.
            Double blind peer review, where those reviewing a paper do not know who the authors are, and the authors don’t know who the reviewers are, would greatly reduce bias in science publications, especially in the high impact factor journals. Publication of all papers should be based on their scientific content, rather than on who the authors are, where they come from, and their relationships with the editors.

        2. Dr Singla: I just went through the first issue of your fine journal. The first article may be original, or at least I could not find another one just based on the title. The second (Biological Screening of Triherbal Formulation on Chemically Induced Hepatocellular Carcinoma, http://iglobaljournal.com/wp-content/uploads/2013/06/3.-Irfan-Aziz-et-al-IGJPS-2013.pdf) seems to be lifted from another journal (the date of acceptance was the same, which seemed unusual): Research Journal of Pharmaceutical, Biological and Chemical Sciences (http://www.rjpbcs.com/pdf/2011_2%284%29/%5B124%5D.pdf), which is also on JB’s list.

      3. One should also ask whether the paper followed Science magazine’s own ethical standards for performing experiments on human subjects. After all, this was nothing but a behavioral study, and I somehow doubt that they got consent from the participants …

        1. Not sure if this qualifies as human experimentation any more than a test of any other company, such as McDonald’s. I think the editors represent the enterprise in this case, and not their personal selves.

          1. 1) Not only editors but also referees contributing their time for free were involved. I’d be pretty pissed off if I learned that I had just wasted time reviewing a non-serious paper that is part of some experiment. I don’t like working as a lab monkey without even knowing it.

            2) Product tests (especially when the product is actually a service) are usually not reported in peer-reviewed journals that have subjected themselves to certain ethical standards.

          2. Absolutely, I feel the same. If I had reviewed this manuscript, it would really have been a waste of my time. Why are the author and the concerned authorities not responding to these comments?

          3. I think the comment by Toby White, 10:24h, below, settles this issue. It was published in the journalism section, exactly like any other test of companies. Also, I think the editors and reviewers did not really invest any time, and possibly would not reveal themselves if the described protocol for preparing the manuscripts was truly followed.

      4. JATdS

        We still don’t know which data in which papers are real and which are fake, given the fake data in papers from scientists at esteemed institutions.

        The whole point of investigations is to find the truth – and this investigation found the truth.

        I am sorry some find it uncomfortable.

        I think we should be more concerned with finding science-fraud than anything else and show due diligence when reporting it.

        The investigation does highlight how easy it is to publish without due diligence of reviewers and publishers. The end justifies the means.

        1. “The end justifies the means” – without commenting on that phrase’s merits (or lack thereof) in this particular case, I would hazard a guess that that is the exact phrase going through the mind of every person who splices a gel, deletes a band or gets rid of an inconvenient data point…

      5. You will probably find a more receptive audience for your complaints if you would trim them down to the most relevant. In particular, Google would have answered question number 5 for you easily. The fact that you didn’t try this first makes it all too easy to dismiss the rest of your queries, because it suggests that you don’t really care about the answers.

        1. Incidentally, most of those “fraud assistants” are undergrads in the same laboratory at Harvard. A total abuse by the powerful academic elite, who think that their ethics are superior to those of the so-called predatory publishers. The predatory publishers are wrong about many things. But so are the ethics of Science, the AAAS and Dr. Bohannon. I am not taking sides; I am giving a balanced critique.

  11. It may be important to note that the Bohannon piece was published in the news section. It’s journalism, not science. We can legitimately complain that it’s getting harder and harder to tell the difference in the pages of Science. But there is a distinction between the two — even if the editors at Science seem determined to smear the line out of existence.

    It makes no sense to hold journalism to the same standards as scientific research. The Bohannon article is good journalism. It is well-written, thought-provoking and perceptive. It is also terrible science — uncontrolled, biased, over-interpreted and based on deliberate deception — but so what? That describes a lot of excellent investigative journalism. The bigger risk is that we confuse the two — suppressing journalism we disagree with because it is unscientific and promoting research we agree with because it makes a good story.

    1. I agree with you – which makes this story analogous to Elsevier’s fake journals: making something look like science, fully knowing it isn’t.
      The message here is: don’t believe anything in the news sections of Science, Nature, etc.
      The message is also “do not hold us to the same standards we are holding you to!” or “do as I say, not as I do”.
      Let’s see for how long this will remain a viable strategy for the GlamMagz…

    2. I disagree with the terms “good” and “journalism” here. In my opinion, this is nothing more than highfalutin’ “Gotcha Journalism,” and is more appropriate for a Murdoch publication.

      AAAS has a vested interest in the outcome of this report, and has obviously committed resources to disseminating the story (I heard it three times on drive-time radio this morning). I’ve been on the fence about letting my AAAS membership lapse, since I get Science via my work library and have no need to waste carbon on the print version. This indirect self-aggrandizement brings me one step closer to spending that money elsewhere.

      At least my other professional association publishes rigorously reviewed science in “Cancer Discovery.”

      1. BTW, my “snark” alert was lost on posting. “Cancer Discovery.” Colorful! Exciting! Nepotistic! Now with an IF over 10! Is this the decade where Science Publishing crosses the Rubicon?

  12. It amazes me that anyone in the scientific establishment believes that the problem can be blamed on the mode of publication, as opposed to the procedural and structural flaws of peer review itself. There is certainly a compounding effect from the increasing number of papers and the shortage of peer reviewers with the necessary expertise, but this doesn’t have anything to do with open access. Years ago, Richard Smith, former editor of BMJ, conducted a study on peer review (inserting deliberate errors into real submissions) and has written eloquently (even a book) on its inherent problems, including bias, slovenliness, etc. Smith does not pull any punches in his paper “In Search of an Optimal Peer Review System” (disclosure: I commissioned the paper):

    “We struggle to find convincing evidence of its benefit, but we know that it is slow, expensive, largely a lottery, poor at detecting error, ineffective at diagnosing fraud, biased, and prone to abuse.[1][2][3] Sadly we also know—from hundreds of systematic reviews of different subjects and from studies of the methodological and statistical standards of published papers—that most of what appears in peer-reviewed journals is scientifically weak.[4][5][6][7]”

    http://www.jopm.org/opinion/2009/10/21/in-search-of-an-optimal-peer-review-system/

    These inherent problems with peer review are what give life and breath to Retraction Watch!

  13. Speaking as someone outside the academic community, who will probably never submit a paper for publication, but relies on published research to form opinions, the disturbing aspect of these findings is how easy it is/would be for those with agendas to publish bogus papers to support said agendas.

    Such papers appear as scholarly works, but actually push some political policy. Ineffective or non-existent peer review allows them to be published. After that, the media publicizes the “results” supported by the supposed authority of the journal. The more spectacular or in agreement with the prejudices of the day the paper is, the happier the media is and the further they will spread the bogus results.

    Thus robbing the population of their right to self-determination, by feeding them false information.

    Experts in the field may have some inkling of which journals are reliable and which less so, but the public has no way of knowing and the media does not appear to care, as long as they get a sensation to publish.

    1. Thanks for making that point. I realize almost nobody so far has mentioned potentially the worst effect of poor peer review: the immense damage it does to the credibility of science. If anything can get through peer review, and some scientists themselves exclaim that peer review is nothing but a “joke” (http://www.michaeleisen.org/blog/?p=1439), what can we say to those who distrust climate science and evolution? Evidence of flawed peer review and of flawed science being published needs to get under the skin of every editor, every reviewer, every scholar, and also every department head and administrative bean counter pushing scholars for more, more, more publications, no matter how, no matter at what cost. We can’t just go on shrugging this off. We have to do better.

      1. What we say to the climate-change and evolution deniers is, I hope, the same thing we say to fellow scientists: that a study is reliable or not in accordance with how its community receives and builds on it, not according to a one-off single-bit-wide stamp. I won’t go as far as Eisen in saying that peer-review itself is a joke. But the idea that a single round of pre-publication review is anything like a reliable stamp of quality is a joke, and not a very funny one. If you doubt me, see Richard Smith’s (2009) In Search Of an Optimal Peer Review System.

        1. I don’t disagree with that, but still, defenders of science use the word peer review a lot. We have the peer-reviewed literature; they don’t. Shall we now say that peer review isn’t really that important? I would rather we treat it as ever more important and get real about fixing the known problems. Sure, we also need to explain that peer review is only one step in the validation process, and that scientists aren’t always right but the power of the scientific method lies in its ability to check, replicate, and self-correct. But that’s no reason to give up on peer review.

          1. I admit I’ve often talked about the PeerReviewedLiterature as though it’s a Thing. I think I’ve mostly done that out of habit, though, and because I’ve heard my elders doing it. When I stop and look at what that really means — and at my own experiences of non-open pre-publication peer-review — I do find it much more akin to a hazing ritual than anything reliably interpretable.

            What review is valuable? Open review. See for example the review history on one of my recent papers. Any non-specialist who wants to judge whether my work there is serious can at least look at what the reviewers did, and see that it was real, engaged work, which made a real difference to the published version. But for most peer-reviewed papers, all we know is the single fact “this passed peer-review”, with no idea of what that entailed.

        2. Thanks for that link. I’ve been meaning to read Richard Smith’s “The Trouble with Medical Journals” for some time now; there’s a whole chapter on peer review.

    2. Jeff, you are spot on with your observation that “…the disturbing aspect of these findings is how easy it is/would be for those with agendas to publish bogus papers to support said agendas. Such papers appear as scholarly works, but actually push some political policy. Ineffective or non-existent peer review allows them to be published. After that, the media publicizes the “results” supported by the supposed authority of the journal”.

      Readers, here’s a real-life example: The University of Sydney operates a business that exists in part to charge food companies up to $6000 a pop to stamp particular brands of sugar and sugary products as Healthy. The high-profile and highly influential scientists who operate that business wrote and formally published in 2011 a paper that exonerated sugar as a menace to public health. Unfortunately, the Australian Paradox paper is extraordinarily faulty and its main conclusion – “an inverse relationship” between sugar consumption and obesity – is false.

      The paper never would have been published in a journal with competent quality control: http://www.australianparadox.com/pdf/GraphicEvidence.pdf

      Nevertheless, the paper was used by the University of Sydney, and the sugary food and beverage industries, to try to stop the Australian government from toughening dietary advice against sugar: http://www.smh.com.au/national/health/research-causes-stir-over-sugars-role-in-obesity-20120330-1w3e5.html ; http://www.theaustralian.com.au/news/health-science/a-spoonful-of-sugar-is-not-so-bad/story-e6frg8y6-1226090126776

      In the real world, this all matters because modern rates of sugar consumption are a key driver of global obesity and type 2 diabetes: http://care.diabetesjournals.org/content/33/11/2477.full.pdf+html

      Yes, John Bohannon has done the world a favour by documenting the fact that nonsense-based papers are easily published without competent peer review by anyone who feels the need. Accordingly, non-scientists cannot simply trust published science, and it would be understandable if taxpayers increasingly choose to stop funding scientific research.

      How do we fix this mess? In my opinion, we need to start retracting obviously faulty papers – and perhaps some underperforming-but-high-profile scientists’ heads need to roll – to provide clearer incentives for “science” to clean up its act.

      1. “How do we fix this mess?”
        Simple: The hard-earned taxpayers’ money obtained by fraud (i.e. on the basis of fraudulent publications) should be RETURNED with the corresponding interest.
        It is not so difficult. See the case of Milena Penkowa and the University of Copenhagen, where, according to Marco (October 6 @ 3:08 pm), “The university paid back around 2.1 million DKK (about 380,000 dollar), Penkowa herself returned 250,000 DKK.”
        Brilliant example for other institutions/countries to follow!
        Ivan, maybe it would be a good idea if RW started a Doing_the_Right_Thing list, where the institutions/academics who returned grants obtained by deception are recorded, starting with Milena and the University of Copenhagen?

  14. WHERE THE FAULT LIES

    To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for age and impact factor) to see what happens.

    But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based “Gold OA”) is premature, as are plans by universities and research funders to pay its costs:

    Funds are short and 80% of journals (including virtually all the top, “must-have” journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very, very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).

    What is needed now is for universities and funders to mandate OA self-archiving (of authors’ final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online (“Green OA”).

    That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.

    The natural way to charge for the service of peer review then will be on a “no-fault basis,” with the author’s institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

    That post-Green, no-fault Gold will be Fair Gold. Today’s pre-Green (fee-based) Gold is Fool’s Gold.

    None of this applies to no-fee Gold.

    Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume — without testing it — that non-OA journals would have come out unscathed, if they had been included in the sting.

    But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:

    Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record — and without paying an extra penny.

    But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA — which, with Green OA, are identical to those of non-OA.

    For some peer-review stings of non-OA journals, see below:

    Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.

    Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.

    Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998), Exploit Interactive 5 (2000); and in Shatz, D. (2004) (ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.

    Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).

  15. Hello everybody.
    Perhaps I missed it, but in case I didn’t — hasn’t ANYBODY asked for a transcript of the bogus article, or some URL where you could still see it?? No one has any interest in seeing it? Really?

  16. The “effort by John Bohannon” was needed because Sokal’s effort proved nothing to the legion of mediocrities who over the last three decades have invaded science and made it untrustworthy. They still blame the messengers.

    I blame the wrong direction that science was given under the pressure of that legion and its promoters – those who made research and publication easy to accommodate mediocrities, plagiarists, demagogues and other adventurers.

    I am very pleased to see Jim Steele quoting Mark Twain: “In religion and politics people’s beliefs and convictions are in almost every case gotten at second hand…” That’s what science is these days. The object of scientific study had always been nature. But now scientists take the data of others as the object of their further studies. Nothing new is being done, nothing fundamental, nothing that needs thinking. As a substitute, papers start with complicated bureaucratic language and end with claims of being on the “cutting edge”. In the middle: statistics, which are then scrutinized by a reviewer.

    The only way out of this morass is to abolish completely the evaluation of scientific research by the number of publications and by the journal’s “impact”. I doubt this can be done by anybody (except by a dictator): those who perform the evaluations are even less competent and less capable than the authors.

    1. You are absolutely right in your last sentence: the authors are the most capable of judging their own work, and hence they must be the people most responsible for what they submit for publication. The journals and publishers just provide a platform to publish authors’ work under certain sets of rules and regulations, which I think need considerable reform. But the emphasis is on the responsibility for original and authentic work, which must be borne by the authors.

    1. Had John Bohannon collaterally submitted his Science sting letter to other OA journals, as he did with his fake manuscript, how many of them would or could have rejected it? Or, alternatively, if the majority of the OA journals had rejected it, would that make Science a rotten journal?
      It is wrong to brand a journal as rotten based on one incident. Several manuscripts have been retracted from journals like Science and Nature despite the supposedly rigorous peer review they adopt; does this mean these journals should be classified as rotten?

  17. I would like to highlight one case to bring more clarity to this issue. Let’s take the example of rofecoxib (Vioxx, an analgesic produced by Merck), as this case has a lot of similarities to the current situation.

    Merck produced Vioxx (read this as a manuscript produced by any author/s) and it was approved by the US FDA (read this as the journal/publisher or any equivalent) for use in humans. The fatal flaws with Vioxx use in humans were later documented and the drug had to be withdrawn from the market, with Merck and not the US FDA having to pay a huge financial settlement to patients. US law made Merck, and not the US FDA, liable for this flaw. Journals and publishers are mere approving bodies that adhere to certain rules and regulations, just as the US FDA does, so why does the liability lie solely with journals and publishers and not with the authors, as it should (if the judgement on Vioxx is valid)?

    1. This is, in my opinion, an incorrect analogy. The New Drug Application (NDA)-enabling studies did not uncover the risks that ultimately led to the Vioxx withdrawal. Although everyone agrees the outcome was horrific, few had predicted it, and the NDA met all “pre-publication” requirements. The AAAS “investigation,” by contrast, was fraudulent from the beginning.

      The timeline is perhaps worth revisiting. Vioxx was cutting-edge pharmacology at the time, and it quite legitimately passed FDA “peer review” in May of 1999. Clinical trials to expand the label indications were then begun, which I guess is analogous to pursuing a follow-on publication. These expanded trials exposed a broader patient population to the drug than that examined in the original “publication.” This included people with greater cardiovascular risk who were not included in the NDA-enabling trials, and who were also at risk of death from what were initially unforeseen complications of COX-2 inhibition. The real trouble began when interim results of the VIGOR trial were read out (March 2000), and Merck began the spinning and fraud.

      Furthermore, the structure of scientific journal articles is bound largely by tradition and journal policies (intro, methods, results, discussion), but authors have free rein to decide what data to present and how to present it. NDAs, by contrast, are legally defined documents, and their submission occurs only after demonstration of adherence to a prescribed set of rules. The raw data in an NDA are routinely audited, and processes for ensuring data integrity and reporting standards are codified in law. I’ve never seen a peer-reviewed article provide evidence of research oversight and data auditing anywhere close to that seen in properly run GLP/GCP studies in Pharma. If academic publications required that level of documentation, PubMed would be dominated by retractions for decades.

      Perhaps the better analogy comes from the present comparison of Pharma and open access publishing. Just as Pharma is not to blame for all the ills in biomedicine, OAP is not to blame for the sorry state of academic research. Publication fraud, either in the sense you posit or in the reality of what we see every day here on RW, can ultimately cause harm to patients. Viewed from the other side of the fence, it appears that when it happens quickly and on a large scale (Pharma), tragedy is always followed by legal action. When it happens via the dribs and drabs of the scientific literature, it’s merely another statistic.

    2. If the FDA were doing a terrible, negligent job of making approval decisions, we would want to know about it. Similarly, if some supposedly peer-reviewed journals fail to conduct decent peer review, we want to know about that.

      1. Frank, it’s not all about knowing things or stinging; rather, it is about responsibilities. It’s unfair to point all the fingers at the journals/publishers; the primary responsibility should and must lie with the authors (as they are the ones who must stand by their data for its authenticity and reliability). That is the major reason why, in Vioxx’s case, Merck and not the FDA was held liable.

        1. Authors bear primary responsibility for their publications, but entities claiming to be peer-reviewed journals should actually conduct genuine peer review.

          1. Frank, and the same applies to non-OA journals published by “entities” like Elsevier, Springer, Taylor and Francis, Wiley-Blackwell and others, right, who all claim peer review? I don’t support a “different-strokes-for-different-folks” approach. I would like to see a “sting” operation conducted at this scale (i.e., in excess of 300 papers, all submitted simultaneously, from false e-mails, using false authors and false institutions) with these big commercial publishers who still control the global science publishing market. I am sure we would have their entire legal teams crying foul, COPE screaming “unethical” and the ICMJE shaking their heads.

            Let’s be honest, this Bohannon article was just a way for Science to try to vilify the OA publishing model. The end does NOT justify the means. Just because you want to expose or show fraud does not give you the right to use fraud to show fraud. That is like saying that to prove a person is a criminal you now have to pull a gun on them. Just because a person does not pull the trigger doesn’t give them the right to pull the gun on anyone, unless they are the police and feel threatened. I don’t think the AAAS has special police powers, or Bohannon for that matter. What is really amazing is that COPE, the AAAS and Bohannon have not come here to this blog to actually respond to the critics and to the fans in a public setting. I am quite sure they are reading this blog, so why the silence?

            Finally, regarding this science vs. journalism discussion. Let’s be absolutely clear here: what Bohannon did was not investigative journalism. It was irresponsible journalism. If he were a real doctor (in the PhD sense), he would know how to prove the bad aspects of the OA journals without having to resort to fraud. I maintain that the AAAS, Science and Bohannon were 1) irresponsible; 2) not thorough or precise in their “sting”; 3) biased in their selection of journals simply because Bohannon was impatient and had no time; 4) fraudulent for using false data, false names, false addresses, false institutions. I can tell you now that if Bohannon were to use his smart tactics to try to show that the US Government was corrupt, would he still be sitting in his “journalistic” office? I doubt it. I maintain, there should be an outcry.

          2. JATdS,

            I reckon you are way “over the top” in your enthusiasm in claiming journalist John Bohannon has done something wrong. I think Bohannon and Science’s wonderfully revealing hoax is a timely and profound wake-up call for “science” to lift its game. Let’s hope this new evidence that published “science” is hopelessly unreliable prompts new across-the-board global efforts in competent “peer review” before – not after – publication.

            You argued earlier that: “The correct way to prove the lack of academic level of a journal is through post-publication peer review”, but I can tell you that post-publication peer-review is a time-consuming and largely fruitless task: http://www.australianparadox.com/pdf/GraphicEvidence.pdf

            JATdS, you worry that Bohannon and co. were “biased” in their selection of journals. You appear to be complaining that they targeted only journals that might be pay-as-you-publish-whatever-you-like “open access” journals, and not “subscription model” journals as well.

            You appear to think that if Bohannon and co. had targeted subscription journals as well, Bohannon’s clownish cancer paper would have been formally accepted for publication by heaps of them too. Alas, I’m guessing you probably are right.

            JATdS, what is now clear to even the most casual observer is that extraordinarily faulty papers with false “findings” can be published very easily in formal “science” journals.

            With consistent quality control AWOL, how can non-scientists take any published “science” seriously?

            And why should taxpayers keep funding the nonsense-based research that is going on in our universities?

          3. Of course it applies to non-OA journals as well (in fact, I argued above that it was bogus to blame the results on open access). I was addressing the point in the post that I was responding to (the FDA analogy), not this unrelated point.

        2. Nobody even looked into the papers before publication. If the publishers do not want to be put at risk, they should state directly on their websites that they will publish anything, provided you pay. Reading one of the generated papers should make the situation clear.

  18. At the risk of your spending a whole day reading all of the Supplementary Files, the e-mail correspondence attached to this body of work is an anthropological study in and of itself.

    http://scicomm.scimagdev.org

    The fake authors are extremely blunt and quite opinionated, and I think part of this correspondence raises entirely new questions. For instance, one of the journals asked the authors to suggest 5 reviewers, to which they responded that they find it ‘unethical’ to suggest reviewers.

    Other threads have the authors complaining about the submission system, when, according to the editors, the authors are simply not adhering to the rules or reading the FAQs.

    In this case the Supp Data are a must read.

    1. This, from one of the spoofed editors, just came into my inbox this morning. Note how all the “here” links are vacant and don’t link to anything. And, indeed, run by India, with branches in the UK and the US, so kudos to the author of the Science paper for spotting them. It is a solicitation from SCIENCEDOMAIN international’s British Biotechnology Journal, boasting of its “Advanced OPEN peer-review system”, a 50 USD article processing charge, and the fact that the journal was among the few that rejected the fake article in the Science sting; the full text of the e-mail is reproduced in the reply below.

      1. I got the same mail in my inbox. In my case, all the links work properly. It seems from the links in the mail that this publisher follows open peer review.

        The mail body is here:

        Dear Colleague,

        British Biotechnology Journal (http://www.sciencedomain.org/journal-home.php?id=11) (BBJ) is an OPEN peer-reviewed, (http://www.sciencedomain.org/page.php?id=sdi-general-editorial-policy#SDI_Peer_review_mechanism) OPEN access, INTERNATIONAL journal, inspired from the great OPEN Access Movement. We offer both Online publication as well as Reprints (Hard copy) options. Article Processing Charge is only 50 US$ as per present offer. This journal is at present publishing Volume 4 (i.e. Fourth year of operation).

        2. Transparent and High standard Peer review:
        In order to maintain highest level of transparency and high standard of review, this journal presently follows highly respected and toughest Advanced OPEN peer-review system (http://www.sciencedomain.org/page.php?id=sdi-general-editorial-policy#SDI_Peer_review_mechanism) (Example Link1 (http://www.sciencedomain.org/review-history.php?iid=170&id=11&aid=790), Link2 (http://www.sciencedomain.org/review-history.php?iid=135&id=11&aid=619), Link3 (http://www.sciencedomain.org/review-history.php?iid=135&id=11&aid=579), Link4 (http://www.sciencedomain.org/review-history.php?iid=175&id=11&aid=825), Link5 (http://www.sciencedomain.org/review-history.php?iid=175&id=11&aid=826), Link6 (http://www.sciencedomain.org/review-history.php?iid=175&id=11&aid=832), Link7 (http://www.sciencedomain.org/review-history.php?iid=170&id=11&aid=786), Link8 (http://www.sciencedomain.org/review-history.php?iid=170&id=11&aid=787), Link9 (http://www.sciencedomain.org/review-history.php?iid=170&id=11&aid=788), Link10 (http://www.sciencedomain.org/review-history.php?iid=170&id=11&aid=789), Link11 (http://www.sciencedomain.org/review-history.php?iid=135&id=11&aid=578), etc). We hope that you will appreciate this Advanced OPEN peer-review system (http://www.sciencedomain.org/page.php?id=sdi-general-editorial-policy#SDI_Peer_review_mechanism), which is expected to give doubtless scholarly benefit and impact to the authors in long run. Additionally we strongly encourage and promote “Post-publication Peer review” by our comment section.

        As per a recent report (Link (http://www.sciencemag.org/content/342/6154/60.full)) of Science journal (http://www.sciencemag.org/content/342/6154/60.full) (present Impact factor 31), one of our journal passed a stringent test of quality of Peer review by rejecting a fake article (Link1 (https://docs.google.com/file/d/0BzvkYtxLTJZjYzlCWVBLS2VKbUk/edit?usp=sharing), Link2 (http://www.sciencedomain.org/documents/Science-SDI.jpg), Link3 (http://scicomm.scimagdev.org/)). We applaud the dedication and hard-work of our peer reviewers and editors to maintain the high standard of our journals. It was reported that only few journals (20), out of total 304 journals tested, rejected the fake article after substantial peer review. We are happy that our journal was among these few successful journals along with industry leaders like PLoS One, Hindawi, etc. We believe that the result of this experiment also proved the efficacy of our Advanced OPEN peer review and ‘post publication’ peer review system. Though the report is debated, as it did not include subscription journals, we normally support any effort to improve the quality and transparency of peer review.

        3. Proposed Time Schedule:
        Submission to first editorial decision with review comments: 3 weeks
        Submission to publication: 6 weeks
        State-of-the-art ‘running issue’ concept gives authors the benefit of ‘Zero Waiting Time’ for the officially accepted manuscripts to be published.

        4. Abstracting/indexing:
        Many respected abstracting/indexing services covered our journals.
        ISI Thomson Reuters (Only for ARRB)
        ProQuest (Screen shot)
        HINARI (United Nation’s Database)
        CAB abstract (UK)
        EBSCOhost (USA) (Mail confirmation link)
        AGORA (United Nation’s FAO database)
        Google scholar
        DOAJ
        SHERPA/RoMEO (UK)
        OARE (United Nations Environment Programme (UNEP), Yale University, etc.)
        Ulrich’s
        Zentralblatt MATH (only for BJMCS)
        CrossRef
        Chemical Abstracts Service (“CAS”) (Mail confirmatiom link)
        For more information please visit here.

        5. Authors’ profile:
        Considering high peer review standard, quality control, etc. our journals have been chosen by academicians of many famous universities, institutes, etc. A glimpse of authors’ profile is provided here (http://www.sciencedomain.org/page.php?id=author-profiles).

        6. Testimonials:
        Appreciation of our esteemed satisfied authors is the greatest inspiration behind the hard-work of our editorial team. Some of the testimonials are available here (http://www.sciencedomain.org/page.php?id=authors-speak).

        7. Article Processing Charge (or Publication Charge):

        Article Processing Charge (or Publication Charge): Manuscript submitted within 1st October, 13 — 31st December, 2013 will be eligible for 90% discount on normal Article Processing Charge (APC) of 500 USD. (i.e. Effective APC: 50 USD). For more information visit here (http://www.sciencedomain.org/page.php?id=publication-charge).

        7.1. Reprints (Hard copy):
        Reprints (Hard copy) are also available at extra cost. For detailed information please see here (Reprint information link (http://www.sciencedomain.org/page.php?id=article-reprints)).

        8. Sample papers:
        1. Antibacterial and Antiviral Activities of Essential Oils of Northern….. (http://www.sciencedomain.org/abstract.php?iid=217&id=11&aid=1352)
        2. African Cassava: Biotechnology and Molecular Breeding to the Rescue (http://www.sciencedomain.org/abstract.php?iid=217&id=11&aid=1351)
        3. Growth Inhibition of Some Phytopathogenic Bacteria by Cell-Free Extracts from Enterococcus sp (http://www.sciencedomain.org/abstract.php?iid=217&id=11&aid=1400)
        4. Primary Somatic Embryos from Axillary Meristems and Immature Leaf Lobes of Selected .. (http://www.sciencedomain.org/abstract.php?iid=217&id=11&aid=1210)
        5. Effects of Initiating Antihypertensive Therapy with Amlodipine or Hydrochlorothiazide on Creatinine Clearance in Hypertensive Nigerians with Type 2 Diabetes Mellitus (http://www.sciencedomain.org/abstract.php?iid=175&id=11&aid=832)
        6. Preparation of Protein Extraction from Flower Buds of Solanum lycopersicum for Two-Dimensional Gel Electrophoresis (http://www.sciencedomain.org/abstract.php?iid=183&id=11&aid=1155)
        7. Diversity of Bacterial Community in Fermentation of African Oil Bean Seeds (Pentaciethra macrophylla Benth) by comparison of 16S rRNA Gene Fragments (http://www.sciencedomain.org/abstract.php?iid=183&id=11&aid=1202)
        8. The Application Development of Plant-Based Environmental Protection Plasticizer (http://www.sciencedomain.org/abstract.php?iid=217&id=11&aid=1209)
        9. Genetic Variability, Heritability and Genetic Advance in Pearl Millet (Penisetum glaucum [L.] R. Br.) Genotypes (http://www.sciencedomain.org/abstract.php?iid=175&id=11&aid=827)

        9. Highly qualified Editors:
        Prof. Y. Dai, Associate Director of Research, Revivicor Inc. Blacksburg, USA
        Prof. Viroj Wiwanitkit, Department of Laboratory Medicine, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
        Dr. Jean-Marc Sabatier, Université de la Méditerranée-Ambrilia Biopharma inc., France
        Dr. Robert L. Brown, Food and Feed Safety Research Unit, USDA-ARS-SRRC, New Orleans, USA
        Dr. Giuseppe Novelli, Università di Roma Tor Vergata, Rome, Italy
        Dr. Juan Pedro Navarro – Aviñó, Technical University of Valencia, Spain
        Dr. Nikolaos Labrou, Department of Agr. Biotechnology, Agricultural University of Athens, Greece.

        10. Manuscript submission
        Option 1:
        Online submission (recommended): Subcentral (http://www.sciencedomain.org/login.php)
        Option 2:
        Email attachment to the editorial office at [email protected].
        General Guideline for Authors: http://www.sciencedomain.org/page.php?id=general-guideline-for-authors
        To download MS word SDI paper template click here
        To download SDI Manuscript Submission form click here
        To download Latex paper template click here

        with regards,
        Ms. Samapika Mondal
        British Biotechnology Journal : An OPEN peer reviewed journal
        http://www.sciencedomain.org; E-mail: [email protected]
        Reg. Office:
        UK: SCIENCEDOMAIN international, Third Floor, 207 Regent Street, London, W1B 3HH,UK,Registered in England and Wales, Company Registration Number: 7794635, Fax: +44 20-3031-1429
        USA: SCIENCEDOMAIN international, One Commerce Centre, 1201, Orange St. # 600, Wilmington, New Castle, Delaware, USA, Corporate File Number: 5049777, Fax: +1 302-397-2050
        India: SCIENCEDOMAIN international, U GF, DLF City Phase-III, Gurgaon, 122001, Delhi NCR, Corp. Firm Registration Number: 255 (2010-11), Fax: +91 11-66173993

        You have received this e-mail in the genuine belief that its contents would be of interest to you.

  19. Provided they are indexed on PubMed, I am all for dodgy, lightly peer-reviewed OA journals – or rather, I would be if I hadn’t been a whistleblower and subsequently been excluded from a scientific career.

    I was interested in the comments on Bruce Spiegelman and claims that he has a history of discovering new messenger molecules that were poorly reproducible by groups not in collaboration with him. I am unable to comment on the quality of Spiegelman’s work, but the outlines of the complaint seemed fairly similar to observations I had privately formed of another high-profile researcher at a prestigious institution in the US – constantly publishing breakthroughs on compounds that in my hands seemed to be fairly inert. I don’t think this person’s papers are totally fake, but I do think they have been considerably over-egged – and where the dividing line falls in his publications between reliable and over-interpreted was never clear to me. The point is you can’t unambiguously accept any publication – whether in Science, Nature or the International Journal of Dodgy Brothers. You have to read it, evaluate it against the rest of the literature, cross-check it against your own experience and your view of the authors’ publishing history, and, once you have reached a judgement, be aware that this assessment can only be tentative.

    In the case of this spoof, I would probably file it among the enormous quantities of publications reporting significant effects of plant secondary metabolites that generally sink without a trace – the majority of which have appeared in high quality journals.

    Stringent peer review arose in response to a situation in which there were limits on publication space and limits on the capacity of minds to process information. Digital technology means there is effectively no limit on the publication sphere, and automated indexing technologies mean we can effortlessly search this space to find the publications that concern us.

    Having low-cost publications operating out of India, the Philippines and Belize, with minimal oversight and quick and effortless publication, is unambiguously a good thing, and I would like to personally congratulate all those journals that agreed to publish the spoof. Do the experiments, write them up, post them quickly in a dodgy OA journal and let the work stand or fall on its merits.

    Anyone who has attempted to contact a high-quality journal concerning misconduct knows the system is broken. The only way to fix it is to introduce some measure of reproducibility into the system. Until then, a bit of the Wild West, as represented by the International Journal of Pharmacoproteomics and Metabolinformatics operating out of Hyderabad, is to be welcomed.

    1. Littlegreyrabbit, I think you need to look VERY carefully at the Bohannon integrative global map again. There are, in my interpretation of his biased sample set, four centres of “fraudulent” or “predatory” (or whatever you want to call them) publishers: India, Nigeria, the US and the UK. A fascinating mega-cluster in the UK. In some ways, this corresponds to what Max Keiser of the Keiser Report (www.rt.com) has been describing for quite some time as the fraud capital of the world. Publishing and the current OA boom are not about academics; they are about easy money and profit. This does not mean that the OA movement is itself all corrupt – quite the contrary, what great free information the OA movement has provided to the scientific community – but the “clean” sector and players are becoming very diffuse.

      The threat lies in the fact that fraudulent, non-academic publishers who will publish anything instantly for money are being referenced by scientists in legitimate journals. This is causing a serious pollution of the scientific literature, VERY rapidly. One publisher (I cannot name it yet) has just completed a study showing that journals on Jeff Beall’s list appeared in less than 1% of reference lists in about 2007, but in more than 15% of all reference lists by 2013, simply because many scientists from developing countries use the internet to look for free, OA studies to reference in their own work.

      Most people think this fraud is coincidental. It’s not. Ask yourself: Why is the Indian ISSN agency assigning ISSN numbers in batches of 200 or 300 at a time to journals published by Indian publishers? Are the clusters of fraudulent OA journals operating out of the US and the UK actually run by US or UK citizens? Who are the banks assisting with the collection of APCs, and, if the journals are indeed involved in fraud, why do the US and UK governments not sanction the banks, or the publishers? Why does Iran have so many fraudulent OA publishers and a massive OA boom alongside Turkey, and is this linked to the socio-political-cultural-economic divide imposed by the West? How do the other BRICS members, who seek economic independence from the global currency, the US$, view the Indian model of fraudulent OA publishing? Does South Africa, which has a very stringent quality-control system (e.g., the ARC), agree with sharing a scientific platform with a BRICS member that basically supports this fraudulent model?

      Too many comments are looking at the sensationalistic issues. What Bohannon does is use sensationalism to prove a point and use fraud as his method of investigation. Then Science used its massive marketing power to spread the word of this OA fraud to news outlets, scientists, blogs, etc. The problems are multiple and complex, but using fraud to expose the rot in Jeff Beall’s list of possible predatory publishers is not the correct or ethical solution. This is not about conspiracy theories. It’s about basic economics and the interpretation of the ethics code to suit one’s objectives.
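
The kind of measurement described in the comment above (the share of reference lists that cite at least one journal on a watchlist, tracked over time) can be sketched in a few lines. Below is a minimal, hypothetical sketch: it assumes you already have reference metadata as (year, citing paper, cited journal) records and a list of journal titles to flag; the records and titles are invented for illustration, and real matching would need far more careful title normalization.

```python
# Hypothetical sketch: estimate what share of papers' reference lists cite
# at least one journal on a watchlist (e.g. a copy of Beall's list), per year.
# The records and journal names below are made up for illustration only.
from collections import defaultdict

# (citing_year, citing_paper_id, cited_journal_title) records -- illustrative only
references = [
    (2007, "paper-a", "Journal of Cell Biology"),
    (2007, "paper-b", "International Journal of Advanced Anything"),
    (2013, "paper-c", "International Journal of Advanced Anything"),
    (2013, "paper-c", "Nature"),
    (2013, "paper-d", "World Journal of Instant Results"),
]
watchlist = {"international journal of advanced anything",
             "world journal of instant results"}

def normalize(title: str) -> str:
    # Real matching would also strip punctuation and handle abbreviations.
    return title.strip().lower()

papers_per_year = defaultdict(set)
flagged_per_year = defaultdict(set)
for year, paper, journal in references:
    papers_per_year[year].add(paper)
    if normalize(journal) in watchlist:
        flagged_per_year[year].add(paper)

for year in sorted(papers_per_year):
    share = len(flagged_per_year[year]) / len(papers_per_year[year])
    print(f"{year}: {share:.0%} of reference lists cite a watchlisted journal")
```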

    2. “I would be if I hadn’t been a whistleblower and subsequently excluded from a scientific career”

      is there a blog or book where you’ve elaborated on this whistleblowing? I’d be interested to hear the story.

      1. Me too. How come you blew the whistle on some obviously powerful people but sign here as a rabbit? I sign my name; I want my story to be known not only to the powerful people, but to everybody, brother rabbit!

  20. Why don’t publishers start their own research institutions and hire their authors, so they can provide data oversight and be accountable for the information they disseminate? And why can’t academics and universities profit by publishing their research findings in their own repositories? What is wrong with such a model? We should all demand accountability – wouldn’t that be better than the status quo?

    1. Exactly. A friend suggested that idea to me, and I think it would be an ideal solution… maybe in another dimension.

      1. Probably over 99% of research is done on public money, but suddenly the results are transferred into private hands – the publishers, who then sell them back to the public. And this happens after the authors have already presented the results in publishable form, when all that remains is to press the “PUBLISH” button.

      2. More… If you think carefully about why peer review is needed, you will see that it is only needed to preserve the scientific reputation of the journal. And abolishing peer review will help you see what you always want to see – what the author has actually done.

  21. An afterthought: Many commentators talked about fake papers, whether they’re presented as the real thing (as, for instance, in Stapel’s case) or to prove a point (as in Sokal’s and Bohannon’s cases), as polluting the scientific record. You are quite right. Just do a check on Jan Hendrik Schön’s papers on Scopus: Schön, who had to retract 9 papers in Science, 7 in Nature, and many others in physics journals, is still being cited for the retracted papers! If you also check who cited the retracted work in recent years, you notice two things: First, many of the journals in which the citing authors publish are low-tier journals, in some cases edited by national science associations in countries such as China (I assume a few are also on Bohannon’s list). Second, the citing authors appear to come from countries in which the free and fully funded dissemination of scientific journals and scientific information may be a problem, and who therefore may never have noticed that the papers were retracted.

    Yes, pollution of the scientific record does exist. But at least in Schön’s case, it was high-profile journals like Science whose review process failed spectacularly, not once, but many times over. I can’t escape the impression that the Bohannon sting is designed to distract from this fact.
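
For readers who want to repeat this kind of check programmatically rather than by hand in Scopus, here is a minimal sketch. It assumes the OpenAlex REST API (api.openalex.org), which supports a `cites:` filter and exposes an `is_retracted` flag on work records; the work ID below is a placeholder rather than Schön’s actual record, and Scopus itself would require its own subscription API.

```python
# Minimal sketch: list recent works that cite a given (retracted) paper and flag
# whether the citing works are themselves marked as retracted.
# Assumes the OpenAlex REST API; WORK_ID is a placeholder, not a real record.
import requests

WORK_ID = "W0000000000"  # placeholder OpenAlex ID of the retracted paper
resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": f"cites:{WORK_ID},from_publication_date:2010-01-01",
        "per-page": 50,
    },
    timeout=30,
)
resp.raise_for_status()

for work in resp.json().get("results", []):
    flag = "[RETRACTED]" if work.get("is_retracted") else ""
    print(work.get("publication_year"), flag, work.get("display_name"))
```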

    1. I am surprised that the AAAS, John Bohannon and Science have not yet come forward with an open discussion to address real concerns about his study. If Bohannon claims to be a journalist, then he needs to adhere to responsible journalistic principles. And if he claims to understand science (and indeed he holds a PhD), then he should be held to the same level of critique for his article as a scientist would be for a scientific paper. Silence in fact confirms the criticisms made by critics. I guess the only way to ensure a response is by publishing the concerns. Despite the hype surrounding this paper, the methodology is seriously problematic (from a scientist’s perspective). He wasted the valuable time and resources of potentially many legitimate publishers, and a price has to be paid for that cancelled experiment. I will keep posting a reminder on this blog every week if I have to in order to invoke a response.

    2. I think it is somewhat contentious to claim that Science’s review process failed in the Schön case. Schön reported results that had been predicted theoretically. His papers did not contain claims that even someone outside the field should be able to identify as wrong, as in Bohannon’s fake paper.

      1. One month later, Bohannon and the AAAS sit in silence. Is this the way that responsible scientists and/or journalists and publishers are supposed to behave in the face of criticism?

  22. JATdS,

    I reckon Bohannon’s fake paper was a clever way of exposing clearly to the world the fact that much “peer reviewed” science is not subjected to competent quality control, contrary to popular belief. It turns out that nonsense-based papers – funded by taxpayers – are published all over the place all the time. Bohannon has done taxpayers around the globe a great service by exposing this expensive and dangerous “publish or perish” charade.

    If you really are concerned about how “responsible scientists” are NOT “supposed to behave in the face of criticism”, I reckon you should turn your attention to the University of Sydney’s Australian Paradox scandal.

    Written by globally influential scientists – http://www.australianparadox.com/pdf/diabetes.pdf – the extraordinarily faulty MDPI-published Australian Paradox paper is, in my opinion, both an academic disgrace and a menace to public health: http://www.australianparadox.com/pdf/Letter-UoS-Academic-Board.pdf ; http://www.australianparadox.com/pdf/Uni&SugarAustraliaPRsugar.pdf

  23. Recently an interview with John Bohannon, Post Open Access Sting, was published here (http://scholarlykitchen.sspnet.org/2013/11/12/post-open-access-sting-an-interview-with-john-bohannon/#comment-116932). Some useful information and actions were recorded there, such as:
    1. The decision of OASPA to terminate the membership of Dove Press and Hikari as a result of the sting investigation.
    2. Also, as a result of the Bohannon investigation, DOAJ has removed 114 OA journals from its list.

    Weeding is always necessary. OASPA and DOAJ are taking action to correct their lists. Nobody is perfect, and revision is always necessary once a kind of ‘peer-review report’ is available. But does anybody know whether J. Beall has taken any action to correct his famous list? The sting operation and all the related discussions on the internet are inclined to highlight who failed in this experiment. They do not highlight those publishers who passed this experiment but still occupy a seat on Beall’s famous list. Phil Davis also reported that Beall is falsely accusing nearly one in five of being a “potential, possible, or probable predatory scholarly open access publisher”. (Reference: http://scholarlykitchen.sspnet.org/2013/10/04/open-access-sting-reveals-deception-missed-opportunities/)

    I thought that if “acceptance” of the fraudulent paper is sufficient for terminating the membership of two OASPA publishers and cancelling 114 OA journals from DOAJ, then “rejection with robust peer review” should also be sufficient for the opposite. I forgot that being a Gold OA publisher means you should be ready to be punished for a single mistake. YES, THEY MUST BE PUNISHED. I also forgot that if a small Gold OA publisher shows evidence of ‘robust peer review’, it must be accidental. It must have to be accidental. I forgot that these small Gold OA publishers have no right to show good intention to improve on previous errors, no right to show evidence of ‘robust peer review’. Once they are stamped as predatory, they must remain predatory for their lifetime. (I also forgot that if these predatory small Gold OA publishers improve and try to become good publishers, then the anti-open-access brigade will lose its most powerful weapon to date.)

    1. I have read the Bohannon interview published in the Scholarly Kitchen very attentively. It is a pity that he could not provide a comment on this blog, where some critics wait for a response. We are not journalists, we are scientists. Very unfortunately, Dr. Bohannon still seems to think that, as a journalist, he is allowed to use fraud to beat fraud. As if the world of journalists’ ethics is a parallel universe to that of science ethics. No, “Dr.” Bohannon, when you submit a paper, whether honest or fraudulent like your 304 papers, you have the responsibility of respecting publishing ethics as we all do. On this blog, as elsewhere, we call out the fraudulent scientists who submit fraudulent, fake or false data sets, like your 304 papers, so you, too, should now take responsibility (as you take laurels) for your “sting”. Most of these journals write, on their ethics pages, that a submission must be ORIGINAL. This means that, independent of whatever conclusion you have drawn (before or after your “sting”), your first act was fundamentally wrong. Just because you have lawyers and one of the world’s top publishers on your side doesn’t make your actions ETHICALLY correct.

      I must admit, though, that you have ruffled a lot of feathers and rattled a lot of bones. I just checked one of the Bentham Open journals that accepted one of your fraudulent papers, and the site actually gave me a 404 error. The journal wasn’t even listed on the master journal list anymore. So, valid or not, you have already caused irreparable damage, some warranted, some not. You have also thrown the scientific world into some level of chaos, where no one trusts anyone. Actually, in this respect, I praise you. For one simple reason: I have often said, in my own preaching, that in order to establish a new system, we have to totally erase the old one. That means that the current traditional model is dysfunctional (or barely surviving behind all the plastic marketing) while the open access movement is in rapid decay. Both must be obliterated to establish something new.

      What is apparent is that there is going to be a wave of copycats. Imagine now we have scientists who pose as journalists; what is preventing them from submitting 304 fake, fraudulent papers, with false names and e-mails, to 304 valid publishers like Elsevier, Springer, Wiley, Taylor and Francis, etc.? What you and Science have in essence proclaimed is that the age of “stings” is ethically and legally OK, and that anyone who claims to be a Dr. (for example in molecular biology) can now submit 304 fake papers to test the “validity” and “honesty” of the system. Personally, I really detest some of the open access scamming publishers that exist on the Beall list, but I can assure you that some of the established publishers have an ethical and editorial rot as deep and as wide.

      1. PS: I challenge COPE, the ICMJE, WAME, the ORI, Science, Miguel Roig, Liz Wager and others who so avidly defend ethics in science publishing to please come forward to provide a FORMAL position on such stings and the ethics of such stings. Even with 125 responses, not one of these entities has posted a public statement on this blog or on their web-pages. What are they waiting for? Or are they afraid of Science? I also invite the established publishers like Elsevier, Springer, Wiley, Taylor and Francis (and others) to state their positions regarding sting operations like this one. In particular, the following important questions should be answered:
        a) Is it ethical to submit a paper with false data, false authors and using false e-mails even when the instructions for authors clearly state that false submissions are unethical?
        b) Is it morally correct to waste time, human resources and the editorial system to prove a point that was already known?
        c) Should journalists have “superior” ethics to other scientists when it comes to submitting papers to journals?

  24. Are predatory journals sneaking into PubMed and PubMed Central?
    According to Wikipedia and references therein, “the U.S. National Institutes of Health does not accept OMICS publications for listing in PubMed Central and sent a cease-and-desist letter to OMICS in 2013, demanding that OMICS discontinue false claims of affiliation with U.S. government entities or employees.”
    https://en.wikipedia.org/wiki/OMICS_Publishing_Group#cite_note-science-7
    Thus, it is surprising to see select articles somehow sneak into both PubMed and PMC. Example: J. Spine Neurosurg. is one of the OMICS journals
    http://omicsonline.org/open-access-journals-list.php
    listed as Spine and Neurosurgery under the Neurology and Psychiatry journal section. Yet, it has managed to get a few select articles into PubMed, including deposition in the PMC database as well!
    http://www.ncbi.nlm.nih.gov/pubmed/?term=J.+spine+neurosurg.
    I wonder how many journals on the Jeff Beall list are thus making it into PubMed. It appears that select articles from the OMICS journals are making it into PubMed and getting PMCID numbers, and perhaps articles from other such journals as well. NIH and fellow scientists, beware!
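
For anyone who wants to run the same check themselves, here is a minimal sketch using NCBI’s public E-utilities (the esearch endpoint), which is one way to script the PubMed search linked above. The journal title string is taken from the comment and may need adjusting to PubMed’s indexed form; treat it as an assumption.

```python
# Minimal sketch: count PubMed records indexed under a given journal title,
# using NCBI's public E-utilities (esearch). The [Journal] field tag restricts
# the query to the journal name; adjust the title string to PubMed's indexed form.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": '"J spine neurosurg"[Journal]',  # title string assumed from the comment
    "retmode": "json",
    "retmax": 20,
}
resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print("Hits:", result["count"])
for pmid in result["idlist"]:
    print("PMID:", pmid)
```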

    1. Sometimes when I see the questions JATdS asks, I fully understand why people don’t want to respond to him.

      This one is a classic, as JATdS is putting scientific demands on a journalistic endeavour. Thus, several questions are completely irrelevant, e.g. those about the co-authors and about the Science paper having to be retracted for unethical science.

      Take also for example question 5. That one is a strange one, since the affiliations of all people *are* given here
      http://www.sciencemag.org/content/342/6154/60/suppl/DC1

      Or take the name of Ffolliott Martin Fisher. Anyone can google the name, and the very first hit you get is https://connects.catalyst.harvard.edu/Profiles/display/Person/81606
      Ffolliott is an uncommon name, especially as first name, but if that is the name he uses, it is the correct name to use.
