“Unfortunately, scientific publishing is not immune to fraud and mistakes”: Springer responds to fake papers story

We have an update on the story of 120 bogus papers being removed by IEEE and Springer. The latter posted a statement earlier today, which we include in its entirety below:

As reported in the media, on 11 February 2014 we were alerted to 16 fake submissions that were published in conference proceedings in Springer publications, mostly in computer science and engineering. The submissions were generated by the SCIgen computer program, which creates nonsense documents. We were alerted to this fact by Dr Cyril Labbé, a French researcher who wrote an article on how to detect SCIgen-generated papers in the Springer journal Scientometrics in January 2013.

We are in the process of taking the papers down as quickly as possible. This means that they will be removed, not retracted, since they are all nonsense. A placeholder notice will be put up once the papers have been removed.

Furthermore, we are looking into our procedures to find the weakness that could allow something like this to happen, and we will adapt our processes to ensure it does not happen again.

For the moment, we are using detection programs and manpower to sift through our publications to determine if there are more SCIgen papers. We have also reached out to Dr. Labbé for advice and ffcollaboration on how to go about this in the most effective manner.

Since we publish over 2,200 journals and 8,400 books annually, this will take some time. We are confident that, for the vast majority of the materials we publish, our processes work. When flaws are detected by us, or brought to our attention by members of the scientific community, we aim to correct them transparently and as quickly as possible.

There will always be individuals who try to undermine existing processes in order to prove a point or to benefit personally. Unfortunately, scientific publishing is not immune to fraud and mistakes, either. The peer review system is the best system we have so far and this incident will lead to additional measures on the part of Springer to strengthen it.

Once we have further information, we will post it on springer.com.

This is an honest and straightforward approach that a lot of publishers could learn from. Some readers might ask what we think of the fact that the papers won’t be officially retracted, but just removed. As far as we’re concerned, it’s far more important to give the reader as much information as possible than to worry about how something is labeled. And these are faked papers, not simply flawed. So presuming the placeholder notices explain what happened, that seems like an entirely reasonable approach.

Related: Should readers get a refund when they pay to access seriously flawed papers?

45 thoughts on ““Unfortunately, scientific publishing is not immune to fraud and mistakes”: Springer responds to fake papers story”

  1. Can you please indicate the exact name of the Springer official who provided this information? The anonymity of this message is precisely the kind of opacity that Springer claims not to have. The statement “When flaws are detected by us, or brought to our attention by members of the scientific community, we aim to correct them transparently and as quickly as possible.”, based on my experience with plant science papers published by Springer, is absolutely false.

  2. What they don’t address, though, is how stuff could be published that was never read by a competent human being. If they just publish bulk conference abstracts without ever reading them, I do wonder what they are paid for.

    1. Indeed. If — and only if (?) — these are so-called “peer-reviewed” conferences, Springer-Verlag and IEEE SHOULD put the evidence on the table and also reveal WHO was responsible for approving this nonsense. This is not just about the nonsense, though, but very much also about the integrity of the publishing industry — and I fear, even about some of the scientists involved in this huge mess.

  3. “For the moment, we are using detection programs and manpower to sift through our publications to determine if there are more SCIgen papers. We have also reached out to Dr. Labbé for advice and ffcollaboration on how to go about this in the most effective manner.”

    Sure. They’ll install a computer program that will be able to distinguish computer-generated papers from human-written ones. Will that fix the problem? The problem is that if so many pure nonsense papers were accepted, we have no reason whatsoever to believe that any paper ever passed any quality check. These nonsense papers are only the tip of an iceberg. That’s the real issue. I hope that Labbé will refuse to collaborate with such a merely superficial clean-up.

    “We are confident that, for the vast majority of the materials we publish, our processes work.”

    There are ways of testing that hypothesis, rather than just believing in it. So far we only know that at least 16 nonsense papers went through. 16 is probably a tiny fraction of all papers published but that is not the relevant comparison. The relevant comparison is to know how many such papers were submitted, how many were rejected and how many were accepted. This requires a very detailed analysis. Unfortunately, only Springer can do it. I suspect there weren’t that many rejections. If there were, Springer should show them to the public.
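    The kind of automated screening discussed above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in for illustration: the function names and the 0.5 threshold are invented, it assumes you have a small corpus of known SCIgen output to compare against, and the actual inter-textual-distance classifier of Labbé and Labbé is considerably more sophisticated.

```python
from collections import Counter

def word_distance(text_a, text_b):
    # Normalized word-frequency distance in [0, 1]: 0 for identical
    # frequency profiles, 1 for texts sharing no vocabulary at all.
    # A crude stand-in for the inter-textual distance Labbé used.
    freq_a = Counter(text_a.lower().split())
    freq_b = Counter(text_b.lower().split())
    n_a, n_b = sum(freq_a.values()), sum(freq_b.values())
    vocab = set(freq_a) | set(freq_b)
    return sum(abs(freq_a[w] / n_a - freq_b[w] / n_b) for w in vocab) / 2

def looks_generated(candidate, known_generator_samples, threshold=0.5):
    # Flag a text whose vocabulary profile sits unusually close to known
    # generator output. `threshold` is a hypothetical tuning knob, not a
    # value from the published detection method.
    return min(word_distance(candidate, s) for s in known_generator_samples) < threshold
```

    Note that this only catches papers resembling a known generator's vocabulary; it says nothing about human-written garbage, which is exactly the commenter's point about the tip of the iceberg.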

  4. The larger issue here relates to conference proceedings in general. In most cases they are unreviewed, and their value is minimal. The situation is particularly bad in computer science, where the general story is (1) a group of organizers decide to hold a conference in some nice vacation spot; (2) lots of corporate employees attend in order to have fun; (3) they write up some bullshit and submit it to the proceedings in order to justify attending, knowing that nobody will ever look at it; (4) the organizers pay a journal to publish the proceedings in order to justify the scientific value of the conference. The response from Springer is really hypocrisy — they know full well what is going on.

    1. “The larger issue here relates to conference proceedings in general. In most cases they are unreviewed, and their value is minimal.”

      This is the key point. I never look at proceedings, and I don’t think many people do either. The work hasn’t been seriously reviewed, and if it doesn’t rapidly appear in the literature, that is a sure sign that it wasn’t true in the first place. Perhaps it would be better to stop publishing these proceedings at all, at least under the name of the formal journals.

      1. Dan, in the plant sciences, proceedings are highly regarded and respected, at least international ones, and also those by societies. Not sure about other fields of study, but I would be surprised if the trend was different.

        1. I’m surprised by this, but other fields may have different cultures. Do people review proceedings submissions in the plant sciences?

          I mean, if I got a review request for an extended abstract or paper to be presented at a meeting (even a prestigious one) I would decline as not being worth the effort.

          1. Generally how this works is that the conference committee sets up a review panel for that conference. I have experienced the system both as an author and as a reviewer, and it works well. The quality of reviews is typically more consistent than for journals, possibly because the reviewers commit in advance (though the reviews are often brief because the papers are brief).
            This was at cognitive science conferences. Psychology conferences tend to be run by reviewing abstracts, and the full proceedings aren’t published (but the conferences are correspondingly less prestigious).

      2. This is a misconception commonly held by non-CS researchers (in CS, it is patently false). In fact, some of the most highly regarded venues in CS are conferences, for example IEEE S&P (not the magazine), or any of these. Acceptance rates in the 10–20% range are normal, at least three reviews occur (even for low-quality conferences), and many conferences these days employ a multi-phase review process.

        Most of these Springer articles are likely published in the “journal” Lecture Notes in Computer Science (Thomson Reuters’ index treats it as a journal as well). Those publications are usually papers from workshops (sometimes revised ones) and are easily identified as such when reading them. This is also the main reason that relying on journal-related metrics is kind of hard in CS — LNCS makes up a very significant share of the yearly journal publications, and most of it is essentially low quality.

        As (mostly) a reader, my experience is that while journal papers are often much more well-written and include more results, they’re not always better and are usually outdated by anywhere between 2 and 4 years — quite a long time for a field where many of the sub-fields are around a decade or so old. I’m hoping that will improve, but it seems to be a slow process.

    2. Sure. But in some unfortunate countryside universities, the proceedings do matter especially for junior people. What a friggin’ black eye for computer science. Sure, I’ve had my share of beer and laughs too, but this conference game needs to stop.

  5. “When flaws are detected by us, or brought to our attention by members of the scientific community, we aim to correct them transparently and as quickly as possible.”
    I cannot say that my experience with Springer supports this statement. I found substantial plagiarism in numerous papers by a certain anti-terrorism researcher, documented it fully, and brought it to the attention of IEEE and Springer in May 2013. I did not bother with the less reputable publishers in which this researcher had also published plagiarized papers. Within less than a year IEEE had retracted eight papers: ( http://retractionwatch.com/2013/01/16/eight-papers-by-anti-terrorism-professor-retracted-for-plagiarism/ )
    Springer, however, only now published an “erratum” (not a retraction) for one paper that is 50% copied from elsewhere and 50% identical to a paper IEEE has already retracted ( http://retractionwatch.com/2014/01/10/a-retracted-retraction-backsies-for-an-anti-terrorism-paper/ http://de.vroniplag.wikia.com/wiki/Nm2 ). Six more papers remain unretracted almost two years after I provided Springer with full documentation of the problems. One of them is a 100% duplication of an earlier paper … See http://de.vroniplag.wikia.com/wiki/Nm/Comparisons for details.

    1. A colleague of mine was asked to review a biology paper for a relatively prominent Springer journal a few years back. The results of the paper were nonsensical, so he contacted the authors to get their raw data (necessary, as this journal does not insist on complete archiving).

      While reviewing their data, it became quite clear that their samples had all been mislabelled. Unquestionably. This was what was driving the (nonsensical) pattern that the authors claimed to have discovered, which was being presented as a unique and important finding. He contacted the authors, proving to them that their data were mislabelled. He got no response. He contacted the editor of the journal and got no response. He eventually recused himself from the review because the whole thing stank to high heaven. The paper was eventually accepted with the obviously mislabelled data and its obviously wrong conclusions. Despite efforts after publication to talk to the authors and the editor about this, it appears that no retraction is forthcoming.

      TL;DR. In the experience of most people I’ve talked to, Springer is awful.

      1. Springer (as all other publishers) does care ONLY about their profit, full stop.
        Editors do care ONLY about their income, full stop.
        Authors also do care ONLY about their income (mainly in form of public grants based on the number of their publications and NOT their quality/usefulness/applicability), full stop.
        Current academic publication system encourages all kinds of misconduct (plagiarism, duplication, data fabrication/manipulation/falsification, etc.) from the authors, to which the editors are VERY reluctant to respond (assuming that they are not actively acting as promoters of the fraudsters) and the publishers are even MORE RELUCTANT to fix the wrong-doing.
        The hard-working tax payers around the world are funding the feast of these parasites and honest researchers do suffer and are penalized, as they get less if any of the limited resources.
        The whole system is so-o-o-o-o-o-o rotten that it needs complete overhaul.
        IT’S TIME FOR A CHANGE!
        See “Scientific Publishing Is Killing Science” in Pacific Standard (psmag.com) Feb 28, 2014.
        A possible alternative is offered by Richard Price with Academia.edu.

        1. Could not agree more!
          What’s wrong with some people, the change will inevitably come, why do they resist it?

  6. “The peer review system is the best system we have so far and this incident will lead to additional measures on the part of Springer to strengthen it.”

    Conference proceedings, journals, etc. Whatever. The problem here is peer review transparency, period. Readers of any article should be made aware, ideally in the PDF itself, of A) whether the paper was edited, and B) whether the paper was peer reviewed, and by how many reviewers. I’d love it if the names of the editor and reviewers appeared on all papers, but at the very least an explicit statement that the paper was edited and peer reviewed should be a must.

    With that said, the problem in this case could NOT possibly have been with the peer review system, because these papers could NOT possibly have been reviewed by peers! (Unless we consider another computer to be the peer of the computer that generated them!) Therefore, Springer is really bending the truth by including this statement. Clearly, they are not using the best system that “we” have for some of the things they publish… and it is not clear to the reader what is and what is not actually subject to review. Further, if computer-generated garbage made it through their system, there is a high probability that human-generated garbage has gotten through. They should address this transparently by marking all non-peer-reviewed articles as such, which likely includes every piece in every journal or proceedings volume that carried a computer-generated nonsense paper.

    1. “if computer generated garbage made it through their system, there is a high probability that human generated garbage has gotten through”

      I’d say that the fact that computer generated garbage made it through their system indicates that for sure the volume of human generated garbage that has gotten through is EXPONENTIALLY higher, even assuming no authors’ misconduct (plagiarism, duplication, data manipulation/fabrication, etc.).

      1. For a sample of 72 Springer articles, I showed that 83.3% shared an obvious Type-I error. Other publishers had similar percentages. Gatekeepers are incompetent. I documented this with the paper
        Paul Colin de Gloucester (2013): “Referees Often Miss Obvious Errors in Computer and Electronic Publications”, “Accountability in Research: Policies and Quality Assurance”, 20:3, 143-166,
        http://WWW.TandFonline.com/doi/abs/10.1080/08989621.2013.788379
        which is the first paper to cite the SCIgen exposé by Labbé and Labbé.

          1. How is it weak? Instead of avoiding debate please provide proof. This work is based on facts and I was motivated to perform it after I encountered seriously buggy code as I explained with this paper.

  7. 2,200 journals, eh? Maybe Springer should publish fewer journals if they can’t properly handle the current load.

    1. It is not only these numbers: in recent months, I have noticed that the Springer proofs department, now based in Chennai, India, introduces, without fail, errors into every single proof I have received thus far. When the operation was still run from Germany or The Netherlands, until about 2011, the quality of proofing was better and more consistent. Now it’s a race to pump numbers. But allow me to introduce the second elephant into the room, Elsevier Ltd.

      I wish to share ONE of my case studies to raise a red flag that there are serious problems, and that one day, if papers are retracted because of errors, those errors might not have been caused by the authors. I edited the content simply for spelling and to remove the CC contacts.

      Subject: Elsevier complaint: Corrections list: Proofs of [JPLPH_51724]
      From:
      Date: Fri, May 31, 2013 10:38 am
      To: “[email protected]
      ————————————————————————–

      Dear ??? (Elsevier proof department),
      Elsevier

      REF: Proofs of [JPLPH_51724]

      Attached please find a list of the proof edits for the paper to be published in Journal of Plant Physiology.

      I wish to indicate that we are extremely displeased to see that Elsevier introduced at least 30 errors into the proof which were correct in the original re-submission (R2) that we had made to this journal. This indicates a serious lapse in the quality control measures at Elsevier. If in fact we the authors had trusted the Elsevier proof team to actually report our data correctly and accurately, some extremely serious mistakes would have gone through to press; for example, ALL of the units in the figure legends of Figures 2-4 were converted from microM to mM. If some scientist had detected this error and then complained to Elsevier that our data were fake or fraudulent because the concentrations we were reporting did not work in the laboratory, then it would have been our responsibility, and such data could, in a worst-case scenario, be the cause for a retraction. However, unlike most cases reported at Retraction Watch http://www.retractionwatch.com/, the errors in the literature were introduced, intentionally or erroneously, by the publisher, Elsevier.

      We should also indicate the figures with photos are of an absolutely unacceptable quality, with a blurred resolution. We do sincerely hope that the figures will be represented as high-resolution images as we submitted. Most importantly, Fig 6 was totally destroyed by the Elsevier proof team and looked, to use an appropriate term, disastrous. I have re-attached Fig 6 in the original Power Point file. Please do not convert any of our Power Point files to jpg or tiff files from the menu. Rather, convert to EMF files to retain 100% of the resolution.

      These two serious groups of errors amount to no less than a fraudulent representation of our data set.

      Questions that immediately come to mind are as follows:
      a) Who sent us this e-mail and why was our e-mail not signed by a human carrying responsibility?
      b) Who introduced these errors into our proof and who is the person responsible for this proof?
      c) How many other proofs in the past created by Elsevier have unknowingly been introducing errors into manuscripts without the knowledge of authors?
      d) How many papers and/or authors may be penalized because Elsevier has introduced errors into the scientific manuscripts?
      e) If the errors are being introduced by Elsevier, then why would the authors not be allowed to have the paper corrected after 48 hours?
      f) Does Elsevier consider 48 hours to be a reasonable amount of time to complete proof edits, especially for transnational research teams? What is the real purpose of introducing such unrealistic deadlines?

      In addition to fully correcting our proof using the list attached, including the use of high quality, high resolution figures, we also expect the Elsevier proof team to review the original file and to ENSURE that no additional errors were introduced and that no sentences were edited without the approval of the authors.

      Finally, we would like to receive a formal and detailed explanation from management in Germany and India regarding how such mismanagement of professional data has been allowed to take place.

      To ensure that the correct Elsevier authorities receive this complaint, I have taken the liberty of contacting Elsevier author services (CC) as well as the Editor-in-Chief (CC). Since this grave error borders on a predatory nature, I have also contacted the managers of Retraction Watch and scholarlyoa.com (BCC) to formalize the complaint and to make this problem known within the public scientific arena. Unfortunately, when errors are observed in papers, the authors are often demonized, but in fact mainstream publishers are on many occasions equally responsible for infractions such as this one. The latter, however, is not widely publicized while the former is.

      We look forward to seeing our important results published in Journal of Plant Physiology.

      Sincerely,

      Jaime A. Teixeira da Silva

      PS: To be extremely honest, I am now worried how many errors were introduced by Elsevier into past papers of my own.

      From: “Albrecht, Alrun (ELS-MUN)”
      To:
      Sent: Tuesday, June 4, 2013 12:39 AM
      Subject: AW: [Fwd: Elsevier complaint: Corrections list: Proofs of [JPLPH_51724]]

      Dear Professor Jaime A. Teixeira da Silva,

      Please accept my apologize for the not acceptable quality of the page proofs.

      The production department has already informed our supplier management and the colleagues will check with the typesetter the reasons.

      Thank you for checking the page proofs very carefully and marking all the necessary changes. Unfortunately you didn’t use the common annotate function inside the PDF. This is more clear for understanding than the used word file. Nevertheless you will get an revised version for approval before print and online publication.

      The page proofs will send out via a computer system automatically as soon as the typesetter will be ready. This is the reason why no personal signature was under the e-mail. Furthermore a general e-mail-address is used to get all answers at the right in-box without delay also when colleagues are out of office. Elsevier is managing more than 5000 corrections per day, an automatic workflow is necessary for this.

      Author will get a page proof of their article to check if all texts and figures are correct. This is necessary because during conversion of the text from normal word file to a text file usable for print some problems could happened. Like in the case of your manuscript special symbols as µ could disappear. We still wait for the author approval and without we didn’t start the publication process.

      The PDF provide for corrections does not contain the high resolution figures provided by the authors. We use in general low resolution figure to reduce the file size. This is necessary because not all e-mail-systems support the sending of large files. Off course the high resolution files will use for print and also for publication online.

      I’m very disappointed about any inconvenient cause in the work of our typesetter. Management and colleagues will discuss this in detail and we will do our best to avoid this in future.

      Sincerely yours
      Alrun Albrecht

      Mrs. Dr. Alrun Albrecht
      Senior Publishing Editor
      Journals Department
      Elsevier GmbH
      Homeoffice Jena
      Gillestraße 6a
      07743 Jena
      Germany

      1. For a sample of 162 Elsevier articles, I showed that 85.2% shared an obvious Type-I error. Other publishers had similar percentages. Gatekeepers are incompetent. I documented this with the paper
        Paul Colin de Gloucester (2013): “Referees Often Miss Obvious Errors in Computer and Electronic Publications”, “Accountability in Research: Policies and Quality Assurance”, 20:3, 143-166,
        http://WWW.TandFonline.com/doi/abs/10.1080/08989621.2013.788379
        which is the first paper to cite the SCIgen exposé by Labbé and Labbé. Coauthors (and Elsevier and referees and editors) are not without blame for this shared Type-I error.

  8. ” We have also reached out to Dr. Labbé for advice and ffcollaboration on how to go about this…”. I agree, they need double f collaboration urgently!

    1. I documented with
      Paul Colin de Gloucester (2013): “Referees Often Miss Obvious Errors in Computer and Electronic Publications”, “Accountability in Research: Policies and Quality Assurance”, 20:3, 143-166,
      http://WWW.TandFonline.com/doi/abs/10.1080/08989621.2013.788379
      : “Labbé and Labbé (2013) conjectured that a referee might not be aware of the topic of a Type-I error which he or she accepts. On the contrary, every primary degree in each of the disciplines computer science; software engineering; and information systems awarded in or after the 1990s prominently featured mentions of object orientation in compulsory lectures, and many of those degrees also required practical coursework in C++. As shown in the present article, such widespread and intense exposure to these topics did not prevent hundreds of Type-I errors about them. [. . .] [. . .] Labbé and Labbé (2013) interpreted accepting nonsense for publication as being according to the Dr.-Fox Phenomenon (Newton, 2010).”

  9. “we aim to correct them transparently and as quickly as possible.” — yeah, yeah, that’s the drill. However, it is quite far from the truth in my firsthand experience. Ignoring published issues and trying to expose the exposer is common practice among many Springer editors.

    1. CR, it is now time to turn the tables and expose the weaknesses of the editors, including the public release of flawed peer reviewer reports and of biased and unprofessional comments by editors or editors-in-chief. It is time to show, wherever appropriate, the issues that are not being handled professionally or scientifically. One thing that few can run away from is facts. So a statistical analysis that is clearly misrepresented, a figure that is duplicated, a set of data that was already published, text that is plagiarized word for word, and similar hard-core issues that not even editors or publishers can deny are what we need to expose, piece by piece, here at RW and elsewhere. RW and sites like PubPeer are excellent because they are, in a way, public records. But it is also important to try to get these case studies published. It is difficult to claim libel if you can prove that the sole objective is to show the errors, factually, clinically, and publicly, with the ultimate objective of correcting the literature. Simple.

      Authors must be held accountable: after all, they wrote and submitted the work. But editors must also be held accountable, because they apparently checked and approved the work. And publishers must be held accountable, because they believed the editors they hired and because they took money for selling flawed work. “Unfortunately, scientific publishing is not immune to fraud and mistakes.” Yes, this is a universal truth, and we now see that predatory open access journals, as listed by Jeffrey Beall, and top-level publishers, like Springer, are equally fallible and may even have the same level of problems and be subjected to the same levels of abuse. However, what will one day differentiate predatory from non-predatory publishers is how they actually handle these situations, resolve them, and correct the literature. That is why they must ALL be closely scrutinized: editor board by editor board, journal by journal, paper by paper, paragraph by paragraph, figure by figure.

      This will be an excruciatingly painful process, as I have personally come to learn, but you will find very many problems, as I have also now come to learn about the plant science-related journals published by Springer (and not only). You get no pats on the back. You get no thanks from Springer when you report an error. You get constantly irritated editors who are arrogant and who always try to avoid the issues rather than addressing them. This is not responsible, truthful, or accountable behavior. This is not academic behavior. It is pseudo-academic behavior to claim “peer” review when in fact a deep scrutiny of each and every paper has not taken place.

      As scientists, we empathize with Springer, but don’t just whine about the situation. Fix it. We are providing FREE intellectual analysis of your literature, as post-publication peer review. This goes beyond what your current editor boards have been able to achieve in their limited capacity. Be grateful for our analyses. Please have the decency to respect these analyses, to take them into consideration, and to act upon them. Every error that is published is a factual case study, and every error must thus be publicly displayed. This is the new and urgent responsibility of scientists: to identify the problems. The new responsibility of editors is to acknowledge them. And the new responsibility of publishers is to correct the literature. Either that, or get exposed, or boycotted.

      1. Today I have decided simply to ignore editors, as they ignore us. We can expose flawed publications on PubPeer and comment there on our own papers. I think the next step is to likewise ignore publishers, since we could very well publish our work online without them. I am afraid, however, that all of this depends on greater participation from colleagues.

        Your initiative of exposing the flaws of publishers and editors is a laudable one, but you must rethink what the objectives are. In an ideal world you would expect them to correct themselves, serve scientists, take a small profit, and do much of the work. However, they will not do this, and this is why they will ignore us, and sue us, so that we continue serving them. What makes their behavior all the more aggressive is the crude fact that in the 21st century we do not need publishers to publish our work, but people have not yet fully realized this. I think we need to start ignoring them all and making science only for scientists and those who value scientists. I vote for publishing in open forums with a set standard PDF format, and letting post-publication peer review select what needs to be taken into consideration. No editors, no publishers. I dream of the day this will come; it is inevitable but is taking awfully long.

        1. CR, from my experience, we cannot wait for that dream day. Some will need to suffer to make that dream come true, too. Exposure is important and must take place regardless of the risks. Trust me, I know. I have suffered a lot in at least the past 4 years when my identity was attached to my complaints. I have become, I believe, public enemy No. 1 in my field of study. I think my peers cringe when they receive my e-mails now. The purpose must always be noble: to correct the literature, or to correct the system. In order to correct the system, we must remove the bad apples. The peer pool cannot appreciate that a high-level peer is a bad apple until the rot has been exposed. YouTube, blogs, PubPeer, any medium is just the first step. The second most important step is ensuring that the information reaches the inbox of that peer pool. An effort is only half an effort if the value of the information it contains is buried deep under the billions of gigabytes of information being published daily. Self-publishing is one resort, but the publisher-controlled system is so ingrained into the very genomes of scientists nowadays that they cannot appreciate the liberating feeling of self-publishing. Self-publishing also requires a fair amount of knowledge about issues that are not related to science, including copyright, web design, internet-related techniques, and security, including cyber-security, which most scientists couldn’t be bothered with. At the end of the day, scientists prefer to swallow the often ridiculous decisions by “peers” in “peer review” rather than opt for the more liberating approach.

          1. I still think that hoping editors and publishers will act against published papers and productive authors (especially popular ones) is swimming upstream, and can be expected to cost the exposer blood while usually causing little bother to the exposed. Also, I feel that the website companies hosting scientific materials already have all the necessary security and design systems, as illustrated by the many archiving websites. Copyright (i.e. exclusive exploitation of content) is an issue inherent to publishers and would thus have to be reconsidered in a system in which papers are not sold for profit; bringing up this point only illustrates how scientists are still thinking within “published magazine” lines. Just as a basic example, there are free programs available that enable you and any scientist to write papers directly in a set PDF format, and still most scientists think a human publisher is necessary for this conversion. Ever wondered why publishers not only omit this fact but never suggest this option to authors, i.e. “to speed up” their submission process?

            I am sure the natural tendency is toward open, direct online archiving of scientific papers, and that scientists can very well constantly curate and revise the available papers as part of their normal job. My question is how long it will take for the scientific community to realise that this rational change in the system is necessary, and that the “publisher + peer review + mediating editor” system is now not only outdated and slow, but highly harmful to the progress of science.

        2. Totally agree with you! Bravo!
          This is The BEST post I have seen on Retraction Watch!
          In the 21st century there is no need for editors and/or publishers.
          The change is happening already: see “Scientific Publishing Is Killing Science” in Pacific Standard (psmag.com) Feb 28, 2014. One solution is Academia.edu

          1. I really wish it was that simple. Most of my colleagues and I already have too little time to read everything that is published in our fields. I need some reasonable way of narrowing down what to read. So far at least, some publishers do a reasonable job at this (typically those that do not publish for profit, such as the IACR and the ACM, and to a lesser extent the IEEE). I’ve tried to work with Academia.edu, ResearchGate, arXiv and so on, but there is simply too much. We need some sane way to weed out the worst of the junk, and peer review is the best of bad options. PubPeer might provide a decent solution in the future.

          2. “Peer review is the best of bad options”: I think exactly the same. I wish real scientists would all say NO to publishers, and start publishing their own findings online, in a standard simple format, on their own, and let the scientific community (anonymously or not) judge their data. I really do not know why I need an editor to mediate communication with my peers (frequently peers in my field whom I have recommended myself), then to put the paper up online, and why I must pay some publisher for this.
            “I need some reasonable way of narrowing down what to read.” Me too! However, it seems publishers are not doing this well, as there is a lot of crap, often fascinating/sexy but still fairytales, being published in “traditional” journals, and really good stuff, albeit less sexy, coming out in small journals. PubPeer works better for me than the impact factor, I am sure!

            I feel the more we rely on editors to solve science issues for us, the more time and money we lose. They are clearly useless to me.

          3. I think a good step would probably be the publication of the reviews as attachments to a published work. That would bring us the good things of peer review (basic sanity checks, readability tests, some plagiarism detection, and so on) without the, at least perceived, problem that the process is not at all transparent. It also means that the community can still have “good” papers in the same location (in my field at least, it is well understood which conferences and journals are ‘good’), which saves scientists like you and me the time spent scouring the web for potentially relevant related work.

            I’m very hesitant to assume the latter is really the right model for the future, because there are so many poorly written documents out there already (standards, project deliverables, technical reports, many workshop papers and even some conference proceedings). I feel that doing research in a particular field requires an understanding of a good chunk of the literature in that field, which is a hard thing to satisfy if the number of publications goes up even further. It’s already possible to publish “whitepapers”/TRs on an institute website or at a workshop; the problem is that no one will read or cite them. Until we solve the issue of filtered dissemination, I think the current model is not as bad as it is sometimes made out to be.

            As an aside — you mention a very good point: Data. For me at least, data (and code) is a different discussion. There can be papers without data (theoretical work, maths, cryptography), but data by itself (without analysis or expertise) is not meaningful. In an ideal world all papers would include both the associated dataset as well as all used software for swift reproduction of results; in the real world, there is industry funding, which means this basically isn’t going to happen.

          4. The Association for Computing Machinery and the IEEE do not care. For a sample of 58 IEEE articles, I showed that 94.8% shared an obvious Type-I error. For a sample of 5 ACM articles, I showed that 40% shared an obvious Type-I error. Gatekeepers are incompetent. I documented this with the paper
            Paul Colin de Gloucester (2013): “Referees Often Miss Obvious Errors in Computer and Electronic Publications”, “Accountability in Research: Policies and Quality Assurance”, 20:3, 143-166,
            http://WWW.TandFonline.com/doi/abs/10.1080/08989621.2013.788379
            which is the first paper to cite the SCIgen exposé by Labbé and Labbé.
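As a purely illustrative aside on those sample sizes (assuming the reported percentages correspond to counts of 55/58 and 2/5, which the comment does not state explicitly), a 95% Wilson score interval shows how little a five-paper sample can establish on its own:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Assumed counts behind the reported percentages: 55/58 (~94.8%) and 2/5 (40%).
print(wilson_interval(55, 58))  # roughly (0.86, 0.98)
print(wilson_interval(2, 5))    # roughly (0.12, 0.77)
```

The five-paper interval spans from about 12% to 77%, so the 40% figure for the ACM sample carries almost no information by itself, while the 58-paper IEEE figure is considerably more constrained.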

          5. I don’t think you can make that claim on the basis of a random selection of IEEE papers. The way the IEEE (and ACM) works for workshop and conference proceedings, as you probably know, is that the organizers of a conference operate more-or-less independently, and the IEEE provides “technical co-sponsorship”, which is a fancy way of saying that the conference gets to use the name. I agree that “IEEE” is not a mark of quality, but it’s still a lot better than a random selection of PDFs from the web.

            (Off-topic, but I find it ironic that you use C++ (not) being object oriented as an example, because Stroustrup says it is actually object oriented, and more than that. I’ve never met a programmer who said C++ wasn’t OO, so I was very surprised at first, but I suppose one could argue C++ isn’t OO in the way e.g. Java is. I’d like to do a study similar to yours that focuses on a specific field and includes all the relevant work within a specific year; I expect the results may be similar to yours, though not quite as extreme.)

          6. The IEEE does not care. All the IEEE wants is money, which it acquires by obtaining intellectual property from others gratis (indeed, less than gratis when page charges are paid) and charging for potential access to that property.

            For example, Ken Campbell emailed with Subject “RE: [vhdl-200x] Code sharing” on 1st June 2012 to a VHDL-standardization email list:
            “Disconnect all this effort from the working group.

            Start up a new reflector.

            Disconnect effort this from IEEE, take it to a public group.

            I would be willing to contribute, but not if there is even a slim chance
            the work could get claimed by an agency like IEEE.

            Ken

            > Hi Jim,
            > I do not think this has anything to do with eda.org since that is an
            > independent organization that has no formal association with the IEEEE.
            > The issue is what constitutes a work product of the working group and what
            > is done by an independent group.
            >
            > I wish you luck on this. The last time I tried to fight this battle there
            > was not much room to negotiate with the IEEE.
            >
            > Regards
            > [. . .]
            [. . .]
            >> Hi Jim,
            >>
            >> As you know, the IEEE owns the copyright to anything developed in the
            >> WG.
            > > The old packages have the IEEE copyright statement in them. Likewise,
            > > IEEE would own the copyright to new packages if they are
            >> developed by the WG.
            >>
            >> Regards,
            >> Joan [Joan Woolery, an employee of the IEEE, who contributes nothing to standardizing VHDL]”

            I submitted a previous version of “Referees Often Miss Obvious Errors in Computer and Electronic Publications” to “IEEE Transactions on Software Engineering”. This version was called “Code Cloning across Paradigms” with this as its abstract:
            “Replication of functionality unnecessarily enacted by copying and pasting has been observed in many codes boasted of in the literature. Problems with cloning exposed in this paper are not restricted to a small set of paradigms (cloning in Fortress; Prolog; Lisp (six dialects); C++; Fortran (more than four dialects); Smalltalk (more than thirty dialects); and more than fifteen other distinct languages is exposed, some in this paper itself and some as supplemental material in the Digital Library). Instead of merely being a case of inexpressive languages, defects in unmaintainable clones result from programmer inertia when noticed or lack of programmer skill when unnoticed.”
            A self-contradictory rejection ensued:
            “The paper immediately launches into a detailed discussion of
            maintenance problems associated with code cloning as observed
            in other papers, without going into any detail.”

            I emailed [email protected] for Managing Editor James Calder this proposal which he ignored:
            “Proposed paper on programming problems including code clones
            Dear Dr. Calder:

            I have a proposal for a paper for the “Proceedings of the IEEE”.

            I have noticed that in many fields, engineering of source code is not fully achieved. When one tries to apply a tool from a domain which one does not specialize in, then naturally the results are not comparable to an expert’s.

            Two popular source bases mentioned in IEEE publications exhibit not managing to match software engineers. One is for electronics, the other for physics. They are the reference implementation of the SystemC® standard (the code is actually published by the IEEE instead of merely being mentioned by the IEEE); and Geant4 (J. Allison, et al., “Geant4 developments and applications”, “Nuclear Science, IEEE Transactions on”, vol. 53, no. 1, 270 to 278, 2006).

            I have detected numerous flaws in these source bases, and in many codes related to them. I have noticed SystemC® (and related) examples of problems in ArchC; FastSysC; Metropolis; the OCCN (the On-Chip Communication Network (OCCN); the OCP (Open Core Protocol); ODETTE; ReSP; and SoCLib. I have also noticed problems in codes related to Geant4, including Cosima and
            http://REAT.Space.Qinetiq.com/xsbias/files/xsbiasing_gras_2007321.tz

            Of course, I have also noticed problems in other codes (such as MCNP, an alternative to Geant).

            An outline of the sections is:
            Introduction
            Code cloning
            Against and for
            Mistakes involving cloning
            Rampant code cloning is not restricted to copy-and-paste
            Conditional branches often exhibit cloning
            Introduction to SystemC® society
            C++, especially Geant code and SystemC® code
            OCCN (and other codes linked with SystemC® code)
            Geant
            Code cloning in C++
            Correctly perceived need to exploit similarities left unexploited
            Geant code
            Comments (even when mistaken) are not noise
            Geant code
            Cloning across conditional branches
            SystemC® code
            A bug from failed cloning which existed for greater than nine years
            Geant code
            Code cloning is a symptom of greater than one hundred bugs
            Geant code
            The Gang of Four
            Singletons
            Manual copying
            Geant code
            SystemC® code
            Not bothering to overload
            Geant code
            Whitespace
            SystemC® code
            Independence where dependence is intrinsic
            Geant code
            SystemC® code
            A bug introduced via code cloning
            Magic numbers
            Geant code
            SystemC® code
            Magic numbers in other code bases in Crap Pooh Pooh
            Cloning in other code bases in Crap Pooh Pooh
            Counterfeit classes
            Geant code
            SystemC® code
            Unused parameters
            Geant code
            SystemC® code
            Lack of familiarity with what C++ already provides
            SystemC® code
            Lack of portability
            SystemC® code
            Poor pseudoC++ code compared to poor Smalltalk code
            Geant code
            People who misname C++ as “C”
            SystemC® code
            Disobeying OSCI licensing terms
            Downgrading from FORTRAN to C++
            Other languages
            Dynamically Linked Libraries of Microsoft Windows
            Pseudo-code
            Negative numbers
            TeX
            Text
            Keyboard configurations
            Electrical harnesses
            Counterfeit classes and emulated enumerations
            Forgetting learnt skills but still deploying them somewhat
            Conclusion

            The type of coverage would involve showing real but not inevitable bugs.

            I discovered most of the problems mentioned in the paper, including more than 100 bugs in Geant4. I played a very minor role in standardizing VHDL. At various stages of my career I have been a computer scientist; an electronic engineer; and a physicist. I am currently a physicist.

            Yours sincerely,
            Nicholas Collin Paul de Glouceſter”

            After getting no answer from James Calder after a long time, I submitted “Code Cloning in Electronics” to “IEEE Transactions on Industrial Electronics” with this abstract:
            “Hardware designers increasingly draw inspiration from software. Unfortunately, they often repeat abandoned mistakes. Various C++ problems (including bugs) affecting hardware are revealed in this paper. Here case studies of electronic and networking codes resulting from C++, including ArchC; FastSysC; Metropolis; the OCCN (the On-Chip Communication Network (OCCN) ( https://SourceForge.net/project/showfiles.php?group_id=74058 ); the OCP (Open Core Protocol); ODETTE; the reference implementation of the OSCI; ReSP; and SoCLib, are presented. Many different parts of codes mentioned in this paper are almost identical. Almost identical fragments are so-called “code clones”. Maintenance problems ensue because when one part of a code is updated another similar part can be accidentally overlooked.”
            The submission website of “IEEE Transactions on Industrial Electronics” demanded that authors demonstrate how their submissions would manipulate its impact factor. I did not manipulate its impact factor, and this submission was rejected with:
            “We use to review paper that comprises of [. . .] the most important of all, does the paper have the potential to earn a high citation.”
            Furthermore this submission was rejected with:
            “The opinion of the reviewers and Associate Editor in charge, the submitted paper is not suitable for publication in the IEEE Transaction on Industrial Electronics”
            despite
            “we do not have reviewers in the field that can provide a proper and fair review”.

            I submitted “Cloning in Networks” to “IEEE Communications Surveys & Tutorials” with this abstract:
            “This tutorial was prompted by noticing code clones in networking software. Almost identical fragments of source code are so-called “code clones”. Maintenance problems ensue because when one part of a code is updated another similar part can be accidentally overlooked. This can result in bugs. Instead of contrived toy examples concocted to prove a point, all the examples in this article are from real systems, including: FastSysC; Metropolis; the OCCN (the On-Chip Communication Network (OCCN) ( https://SourceForge.net/project/showfiles.php?group_id=74058 ); the OCP (Open Core Protocol); Smalltalk; the reference implementation of the OSCI; ReSP; and SoCLib.”
            Disproving claims of the IEEE is outside the scope of IEEE journals, therefore “Cloning in Networks” was rejected.
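The maintenance hazard these abstracts describe can be sketched with a minimal, hypothetical example (in Python for brevity; not taken from any of the codebases named above): a fix applied to one fragment of near-duplicate code is silently missing from its clone.

```python
# Hypothetical illustration of the code-clone hazard: two near-identical
# fragments, where a fix applied to one is easy to miss in the other.

def mean_voltage(samples):
    # Fixed: guard against an empty list (division by zero).
    if not samples:
        return 0.0
    return sum(samples) / len(samples)

def mean_current(samples):
    # Clone of mean_voltage; the empty-list fix was never copied here,
    # so this still raises ZeroDivisionError on empty input.
    return sum(samples) / len(samples)

print(mean_voltage([]))        # 0.0
try:
    mean_current([])
except ZeroDivisionError:
    print("clone still has the bug")
```

This is exactly the failure mode the abstracts point at: an update to one copy is accidentally overlooked in the other, and the two fragments then diverge in behaviour rather than in intent.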

  10. Re the other nonsense papers in the Springer and IEEE lists: do they contain citations of other papers? Could these papers be being used to push up your citation ranking, a bit like those companies that offer to improve your Google ranking?

    1. An interesting case study was published by Beall on his blog, criticizing a Springer journal editor for ignoring plagiarism: http://scholarlyoa.com/2014/03/06/is-the-editor-of-the-springer-journal-scientometrics-indifferent-to-plagiarism/
      The original paper can be found here: http://link.springer.com/article/10.1007%2Fs11192-013-1130-5
      So, this poses an interesting conundrum. If Springer claims to enforce its COPE- and ICMJE-based publishing ethics, and it is abundantly clear that this editor thinks plagiarism is acceptable as long as the copied text, even without quotation marks, references the source, then surely this puts Springer in a quagmire.
      I should add a small criticism of Beall. His blog is supposed to focus exclusively on “predatory” open access journals, yet he is now focusing on plagiarism. Although, he should be patted on the back for finally calling out Springer.

      1. In recent months, I have noticed an irritatingly bad pattern at Springer. When a paper is (unfortunately) rejected by a Springer journal, in some cases for really stupid, illogical or scientifically unsubstantiated reasons, an automatic message appears related to SpringerPlus. I paste a verbatim e-mail below, edited only to remove the personal details, the name of the journal and editor, and the title of the manuscript. I can, however, confirm that this paper recycling is undoubtedly a real cash cow for Springer. I wonder what BMC, which Springer Science+Business Media owns, would say about the moral nature of this automatic transfer system: reject, then resubmit immediately to SpringerPlus for a chance at acceptance with a juicy fee…

        “Dear Dr. XYZ,
        Thank you for submitting your manuscript to ABC. I regret to inform you that ABC is unable to accept your manuscript for publication. [edited out]. It is journal policy not to accept such papers but recommend that they be submitted to an agronomy journal, of which there are many. We hope that this, otherwise interesting research will thus find its way into the literature.

        However, I believe that your manuscript is very well suited for the journal SpringerPlus and I would like to advise you to transfer your manuscript there.

        SpringerPlus is an Open Access journal which accepts manuscripts in all disciplines of science, technology, engineering, humanities and medicine. The journal has an all-inclusive scope; it publishes all manuscripts judged to be scientifically sound by reviewers. SpringerPlus will not reject a manuscript because it is out of scope or for its perceived importance or ability to attract citations. SpringerPlus will either accept your manuscript for publication or not, you will not be asked for additional research. Please note that you do not have to do reformatting of any kind. You can find more information about the journal at http://www.springerplus.com.

        Open Access

        SpringerPlus charges a one-off payment (article-processing fee) to cover all editorial costs and fund the Open Access publication of the articles it publishes. All articles in SpringerPlus are freely downloadable for anyone, no subscription is required and copyright remains with the authors: articles can be used without any
        restrictions. If your institution is a SpringerOpen/BMC member, or if you work in a country listed here, you may be entitled to a discount or full waiver of this APC. For more information please visit our website or contact your librarian or funding agency.

        If you agree to transfer your submission to SpringerPlus, please click here: [edited out]
        This offer is valid until 09 May 2014.

        Upon receipt of your approval, the SpringerPlus editorial office staff will transfer your manuscript files across for you. Please note that you will have an opportunity to update or revise the manuscript before final submission to the editorial board, and that we will not transfer your manuscript without your approval.

        If you have any questions, please visit the journal website or contact our editorial team at [email protected]

        With kind regards,

        BBB
        Editor in Chief
        ABC”

        Sometimes, I wonder if we are dealing with science publishing, or a fish market.
