
Retraction Watch

Tracking retractions as a window into the scientific process

Weekend reads: Stem cell researchers falsifying data, neuroscience research forgets statistics tests

with 17 comments

Another busy week at Retraction Watch. Here’s some of what was happening elsewhere on the web:

Like Retraction Watch? Consider supporting our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post.


Written by Ivan Oransky

March 29, 2014 at 9:34 am

17 Responses

Subscribe to comments with RSS.

  1. I would like to comment on the last post regarding threats of libel. This is a matter of some concern for me, a whistleblower, and other private individuals, not least because legal fees can be very hard on the pocketbook. The defamation lawyer I consulted at one time charged $500/hour. But I did learn that there is another way out, although I don’t know how effective it is. If one has homeowner’s insurance, additional coverage can be added for “personal injury,” which costs only about $25. This is not the same as “personal liability,” and it also requires that one have an umbrella policy — mine at present is for $2,000,000 and costs about $600 annually.

    hzhill

    March 29, 2014 at 10:28 am

    • So, a home insurance company would cover lawyer expenses in a defamation case against you that has nothing to do with your home? Sounds a little bizarre. Or is it an add-on they allow you to have just to be able to sell you another product?….

      Jerry Lofti

      March 29, 2014 at 12:56 pm

      • Our umbrella policy is with RLI Insurance. They explain on their website the kinds of things they will do, which is basically to defend you if you are sued — for anything? I’m not sure.

        hzhill

        April 16, 2014 at 5:10 pm

  2. With regard to multi-level analyses… That is another fad, pushed by people who would be hired as consultants to carry out such analyses (we all know statisticians who contribute very little conceptually but nonetheless want to be on your grants and your papers). In most cases, doing a multi-level analysis does not change the results at all. That’s the thing: science is fragmented into a bunch of small lobbying groups. They all launch mini-fads at some point to gain some benefit. And the media falls for it every single time.

    Jerry Lofti

    March 29, 2014 at 1:01 pm

    • In plant science, 100% of the papers I submitted in 2013 to moderate-IF journals met a really bizarre rationale about negative results. All were rejected based on overwhelming scrutiny of the negative results at the expense of the positive ones, as if science conducted in the laboratory only produces positive results! One study in particular, which involved almost a year of laboratory effort several years ago, yielded ZERO positive results, but the level and breadth of treatments would give would-be researchers at least one perspective on what would NOT work. Understandably, many of the very same journals, primarily published by Elsevier and Springer, continue to operate as traditional print publishers and are thus limited in how much they can publish, which seems ironic since the amount of science conducted is increasing, not decreasing. There is definitely a culture of what is “popular” trumping what is “novel” in the “elite” plant science journals. I have found quite a refreshing attitude at The All Results Journals, and am now testing that journal for fairness towards the importance of negative results. But why do we have to approach “fringe” journals to publish our work and find a culture that appreciates the inherent value of what we have done?

      In 2013, I submitted about 5-10 reviews, all unique (i.e., no such reviews yet exist in the plant science literature), and one would think that such topics would be openly welcomed for their top-tier originality. Such reviews take months to write, and often involve multiple collaborations to ensure the highest level of scientific quality, accuracy and rigor. Instead, what I have found across many top-tier plant science journals is an incredibly nonsensical attitude of rejecting because a review is “out of scope” (it makes no sense how a biotechnology review could be out of scope in a plant biotechnology journal) or “not original enough” (when in fact no such review exists). Hopefully, in 2014, I will compile a list of the most “ridiculous excuses” I have received from editors and journals, who are clearly biased against my research because of my open criticisms of Elsevier and Springer, and their editor boards, when rejecting manuscripts. Rejections are an integral part of the publishing landscape, but when they are made on illogical, nonsensical, biased, or ridiculous grounds, then the editors deserve to be exposed (and thus the journal’s and publisher’s mismanagement). I still perceive a strong anti-author, or anti-science, feeling when I read many comments at RW and on other blogs that goes beyond a plain critique of misconduct. And I am of the opinion that a lot of the frustration felt by scientists, at least those heavily invested in publishing, is that the system is broken, if not corrupted, at least in plant science. Either that, or you’ve got a good buddy on the editor board who will “facilitate” a smoother and more friendly peer review.

      To give you a great example of what has come to characterize the top of the plant science publishing echelon, here is part of a rejection I received 3 days ago for a review on the genetic transformation of an orchid genus submitted to Journal of Biotechnology (Elsevier): “the biotechnological relevance of [orchid] is quite limited since it is mainly used as ornamental plant.” (Alfred Pühler, Chief Editor) Considering that Dr. Pühler is a microbiologist, one can see his attitude towards a lowly orchid genus for which no such review exists. Unfortunately, his remark also reveals his deep ignorance of the medical aspects of this particular orchid genus, and his equal ignorance of the literature. It also reveals how many journals, such as this one, are only chasing “popular” topics that will be highly referenced, and thus cited, leading in the mid- to long term to a boosted impact factor. These illogical decisions only further emphasize how the impact factor is being gamed, even by the gate-keepers of apparent academic quality and novelty. How can ONE individual determine the fate of so much collective intellectual investment, amounting to thousands of hours of work over several months? It is astonishing how an e-mail a few lines long, in some cases a robotized version sent to many submissions, can undermine the great human and scientific effort made in so many instances. The culture of originality, depth of perception and scientific accuracy, fundamental aspects of science and scientific exploration, is being threatened by this type of impact-factor, “popularity” culture.

      Unreasonable critics would probably say “get over it, resubmit and find a journal that can appreciate your work”, which one ultimately has to do because it is impossible to penetrate past this first layer of ignorance and narrow-mindedness in the editor boards of supposedly respected journals, or one can say, it is time to expose the “establishment”. The time has arrived when we can no longer tolerate this level of ignorance by so-called leaders who are (unfortunately) also the gate-keepers of journals whose pages are filled with errors and whose publishing managers continually ignore the importance of post-publication peer review, originality, and logical discourse. What other choice are we (the scientists) left with, unless we want to live the rest of our days as brain-washed scientists being insulted by these sorts of individuals who have big positions, big salaries, big power but tiny perspectives on the world of science?

      I am rapidly concluding that the erosion of science values, in research and in publishing, is taking place not only through the bottom-feeders that characterize so many “predatory” publishers, and predatory scientists, but also through this horrific, arrogant blindness at the top. Very unfortunately, the plant science community is still narrow-minded enough to think that in order to gain respect and admiration, papers must be restricted to the top-tier (i.e., high-IF) journals. This top-down perception is damaging plant science’s image and will come back to bite it in a very serious way, quite soon, I believe. The only way to achieve justice, not through kangaroo courts, but through exposure of all perspectives on both sides of the aisle, is for the “Joe the Plumbers” of plant science, like me, to come forward and tell our stories, and share our experiences, in black and white, so that the youth may get a (more) balanced perspective as they move forward in their scientific careers.

      Jaime A. Teixeira da Silva

      March 29, 2014 at 2:21 pm

      • “In 2013, I submitted about 5-10 reviews, all unique”. Was it 5 or 10? Do you actually get any work done besides writing unique reviews? Writing 10 unique reviews in one year is a full-time job….

        Jerry Lofti

        March 30, 2014 at 7:48 am

        • Jerry, exactly. I am retired, so this is now my full-time unremunerated “job”. I consider it a pleasant hobby, of sorts. I don’t think it is correct to sit on a blog and critique others while ignoring my own efforts, or challenges, in publishing. RW allows for introspection, self-analysis and auto-critique while looking outward for answers. I have always believed that those who are not active, or who have no experience, cannot comment. So, I stay active in the publishing world, as alluded to by the Portuguese epic writer Luís de Camões (http://en.wikipedia.org/wiki/Lu%C3%ADs_de_Cam%C3%B5es), who once wrote in the Lusíadas, “Numa mão a pena, noutra a espada” (“in one hand the pen, in the other, the sword”). Of course, this is much to the ire of many of my peers in plant science, many of whom consider me public enemy No. 1 because of my open, unrestricted critiques. After a decade or two of work, a lot tends to accumulate, so early retirement is a good way to unclog the system. That is why retractions are so important: the risks are higher now, and the dangers seem so much more real than they were 5 or 10 years ago (my personal perception). The dangers have become much more real in the last 2-3 years. And that is why I am not afraid of critiquing the establishment, editors, colleagues, or “predatory” entities, because without open, frank and public critique, science cannot advance. I can appreciate that my situation is quite unique in many respects, and that most of the colleagues or professionals I know are not free of associations, whether to an institute, a job, a position, a journal or publisher, an editor board, or some other level of official responsibility. So, they fear. Fear of repercussions is the biggest factor hindering wide-scale post-publication peer review across the plant science literature, but I am moving forward to spread the message of the urgent need to achieve this.
          I understand that a lot of scientists will take hits, personal and professional, but to think that science or publishing are activities free of risk is ridiculously naive. Retractions will become as natural as publishing a few years from now, and at that time they will hopefully reflect a corrective measure against misconduct only. But at the moment, the already published literature – at least in the plant sciences – has not been explored thoroughly, examined and critiqued by independents, only by “peer” insiders and the limited “intellectual structure” that constitutes traditional peer review. Many editors that I confront head-on, without reservations, are extremely resistant. Resistant to conversation, resistant to critique. Resistant to change. Resistant to correcting the official academic record and literature. Their arrogant pride in most instances is what is preventing a fair, open and logical discourse about the poor or corrupted state of plant science publishing. And these individuals must be exposed, no matter who they are, how high their ivory tower is, or what institute they work for or represent. Therefore, while trying to build my own literature and publish my own works, in whatever modest form they may take (certainly not Science- or Nature-level work), there is a deep and intrinsic responsibility to examine the literature and to state, openly, what is wrong with it. Most importantly, the authors, editors and publishers who approved the publication of tainted science must be held accountable. This is the biggest challenge. It does not necessarily have to involve a retraction, but it must involve some form of open-access, publicly available record of the problems: what was wrong, and how the problems were solved. At the moment, I would say that in plant science there is literally ZERO culture embracing this ideology and these policies.
          The current retractions we see, mainly in Elsevier and Springer journals, are little “band-aids” applied by these corporate publishers to save their own image and to give the “false” impression that a true correction of the literature is taking place. But it is not. Complaint after complaint of mine is ignored by these publishers. Their response, now a standard one: silence. Little do they know that a compilation of all complaints will of course be published, and finally their silence will come back and sting them where it hurts most. Silence by the editors, the authors and the publishers corrupts the system further, because it indicates that they have something to hide. So, it’s time to smoke them out, all of them, until a new structure exists with individuals who no longer need to be smoked out in order to understand the concept of “responsibility towards science and society”. The current model in place is egoistic, focuses only on the “super-id”, and only advances science and society after it has satisfied a personal need. At all levels. This structure has to be imploded and something totally new has to arise. I should add that my analyses are not always perfect, or fail-safe, but it is important to inculcate a culture of questioning and critiquing while simultaneously seeking solutions and alternatives. Anyone who knows me knows that that is what I am trying to achieve. At least for plant science.

          Jaime A. Teixeira da Silva

          March 30, 2014 at 4:05 pm

    • I just finished a review session in which many people had your rather amazing level of ignorance about statistics. Design sucked, statistical plans were terrible, clarity of the actual relationship of the analysis to the SA was absolutely wretched, and they wanted to spend the money of the people of the US on this garbage! Wow, appalling. While as a statistician I do my little bit, what I contribute mostly is scientific rigor and appropriate analysis. If you want money for your research, someone has to be there to ensure that the results have some meaning. In this case, “multi-level methods” on its own is almost meaningless – it is like saying “linear models analysis”. There are 20 different types of multilevel analysis, but some attempt to take the cluster/individual effect into account must be made.

      Statistical Observer

      March 29, 2014 at 10:50 pm

      • “Some attempt to take the cluster/individual effect into account must be made”
        There is always a trade-off, and it depends on the domain. Repeated measures designs and corresponding statistical tests already do take into account individual effects, for instance. As I said, the “multi-level methods” and “Bayes” statistical lobbyists have an interest in spreading their fads because that is the way they will get additional funding and other goodies from the Government.

        Jerry Lofti

        March 30, 2014 at 7:45 am

        • Repeated measures designs, done in the contemporary and most appropriate manner, ARE multi-level designs. There’s no difference. There are 3-4 ways of doing the RM analysis, depending on the type of RM considered, and covariances, but they are multi-level models. One of the great problems in science today is lack of reproducibility, and this is often due to insufficient attention to statistical details.

          Statistical Observer

          March 30, 2014 at 8:21 am

          • “There’s no difference”. Exactly my point! Some people are just relabeling classic methods and reinventing the wheel for the purpose of getting $ mileage out of it. The reproducibility problem is due to a single issue: funding and demands on publication rates. With adequate funding and less urgency to publish more and more, one would double or triple the N by default. That requires more money and more time to complete the study (and publish it). It’s pretty simple, really.

            Jerry Lofti

            March 30, 2014 at 11:54 am
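
The point at stake in this exchange, that *some* attempt to account for the cluster/individual effect must be made, can be sketched with a toy simulation. This is a minimal illustration in pure Python standard library; all the numbers (cluster counts, variances) are hypothetical, chosen only to make the effect visible. It shows that treating correlated within-subject observations as independent understates the standard error, whether one then fixes that with a multi-level model or with a repeated-measures summary per subject:

```python
import random
import statistics

random.seed(42)
n_clusters, m = 200, 10          # subjects per group, measurements per subject
sd_cluster, sd_resid = 1.0, 1.0  # between-subject and within-subject spread

def simulate_group():
    """Return per-observation values and per-subject means for one group."""
    obs, means = [], []
    for _ in range(n_clusters):
        u = random.gauss(0, sd_cluster)  # subject-level random effect
        ys = [u + random.gauss(0, sd_resid) for _ in range(m)]
        obs.extend(ys)
        means.append(statistics.mean(ys))
    return obs, means

obs_a, means_a = simulate_group()
obs_b, means_b = simulate_group()

def se_of_diff(xs, ys):
    """Standard error of the difference in means, assuming independence."""
    return (statistics.variance(xs) / len(xs)
            + statistics.variance(ys) / len(ys)) ** 0.5

naive_se = se_of_diff(obs_a, obs_b)        # pretends all 2000 obs are independent
cluster_se = se_of_diff(means_a, means_b)  # one summary value per subject

print(f"naive SE:   {naive_se:.3f}")
print(f"cluster SE: {cluster_se:.3f}")     # larger: the honest uncertainty
```

Collapsing to per-subject means is the simplest repeated-measures-style correction; a full mixed-effects model estimates the same between/within decomposition explicitly, which is why the two approaches coincide in simple designs.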

  3. Not particularly impressed by Mina Bissell.
    If I understand her letter correctly there was one major discrepancy between the two groups – one was getting significant cell surface levels of CD44, the other was not. This was isolated down to a problem of proteolysis.
    However, the fact that CD44 is extremely sensitive to trypsin has been known for almost 20 years – if I may refer to “De Novo CD44 Expression by Proliferating Mesangial Cells in Rat Anti-Thy-1 Nephritis” by Nikolic-Paterson et al., Journal of the American Society of Nephrology:
    “Mesangial cell expression of the CD44 protein was shown in two ways. Cell-surface CD44 expression was demonstrated by flow cytometry using the OX-50 mAb (Figure 7a). This expression was exquisitely sensitive to trypsin, as a 2-min treatment with 0.25% trypsin removed all CD44 antigen from the cell surface, whereas the expression of other antigens (such as major histocompatibility complex Class II and intracellular adhesion molecule-1) were unaffected (not shown).”
    They also did a Western blot – something I would have thought would have been the first step when Bissell and her collaborators began to get differing results. One gets the distinct impression that Bissell was motivated not to share her sense of wonder at rediscovering this sensitivity to proteolysis, but rather to discredit the idea of replication. As she puts it, “our two laboratories quite reproducibly were unable to replicate each other’s fluorescence-activated cell sorting (FACS) profiles of primary breast cells.” Of course, the fact is they WERE able to replicate each other’s findings, but anyway… The subtext seems to be that since this particular lack of reproducibility can be traced to simple technical issues, every lack of reproducibility will in the final analysis come down to issues of technique. And that, I think, is most unlikely.

    I have never heard anyone seriously suggest that reproducibility, or the lack thereof, should be used to detect cheaters, or that it is either possible or desirable to reproduce every single paper. The point is simply to identify which areas of science do not reproduce well, have the practitioners in those fields acknowledge this or be made aware of it, and work out what could be done to improve it. It may be that increasing reproducibility would hamper some scientific research; this needs to be weighed against the potential of the increasing noise filling the scientific literature to also damage scientific progress.

    Otherwise the life sciences stand in danger of being like the waiter who constantly makes mistakes in calculating the change, but whose mistakes mysteriously always benefit him financially, or like the spoon-bender whose undoubted psychic powers are inhibited by the presence of a sceptic in the audience.

    littlegreyrabbit

    March 29, 2014 at 3:22 pm

    • Would the appropriate reference for your last paragraph be Red Lights (http://en.wikipedia.org/wiki/Red_Lights_(2012_film)), and is your analogy saying that Bissell is something like Robert De Niro?

      JATdS

      March 29, 2014 at 3:36 pm

      • The best “retraction” today, 31st March, 2014, was the “retraction” of Japan’s permits to conduct illegal whaling in the Antarctic by the International Court of Justice. This science farce had gone on long enough, and I was sick and tired of seeing whale meat in my supermarket, most likely there under the fake banner of “science”, JARPA II. This is a victory for nature, a victory for science, and a victory for humankind.
        Select stories:

        http://news.yahoo.com/court-orders-stay-japanese-antarctic-whaling-115139531.html

        http://www.bloomberg.com/news/2014-03-31/japan-ordered-to-end-whaling-as-hunt-is-not-science-court-rules.html

        http://www.mirror.co.uk/news/world-news/endangered-whales-killed-dog-meat-1919743

        What the scientific community now needs to do, somehow, is closely analyze any and all “research” published between 2005 and 2014 under JARPA II. We have the responsibility not only of letting the court make this far-reaching decision, but also of examining the quality of the “research” output in post-publication peer review.

        And, elsewhere in Japan, the STAP cell case continues to make headlines:

        http://dailynews.yahoo.co.jp/fc/science/stap_cells/?id=6112115

        JATdS

        March 31, 2014 at 3:45 pm

        • Of course, perhaps they were just calling it “whale” meat at your market, but who knows what it really was!

          Jerry Lofti

          March 31, 2014 at 3:57 pm

          • Jerry, this is Japan, not China. Although whale hunting is a despicable practice not consistent with human values of the 21st century (whatever those may be…), it is an intrinsic part of Japanese culture, so I am not aiming an attack at Japanese culture, which has the inherent right to defend itself, as does any other culture. What I am criticising is that profits were being made under the banner of science. Now that a legal ruling has passed, this is a dual victory, as I indicated above. From here on, scientists need to step in and analyze the legitimacy of the JARPA II-led “research” that was published in journals. Could we expect another RIKEN-style or Sato-style scandal to emerge? Most likely yes. Your comment hints at dishonesty by the supermarket. Despite the unfortunate sale of whale meat in supermarkets, I can guarantee that that whale meat was sold 100% honestly. This is probably one of the most difficult things to comprehend about Japanese culture: the extremely honest defense of something that is, or can be, so dishonest. Related to food, products occasionally come into Japan from China that are shocking, and remorselessly dishonestly produced. About two years ago, dumplings made in China were being sold claiming to be meat, but were actually made of cardboard with added meat flavours. And this despite rigorous quarantine in Japan. What post-publication peer review also needs to do now is to identify any science that is being used by society in an unethical way. It is absolutely senseless to have these rigorous “ethical” rules in science publishing, and then see published science being abused, in practice, in (and by) society. So, to answer your question, I am 100% confident that the supermarket was selling me whale meat, which was often (shockingly truthfully) labelled with the exact origin and the exact species of whale.

            I tried to find some information about JARPA II-related publications, but it is not easy to identify actual published papers. This will take some concerted effort, most likely because much of the “scientific” literature, if in fact any was published, may have been in Japanese. Some leads, nonetheless, for the whaling scientists among RW readers:

            https://events.iwc.int/index.php/workshops/JARPAIIRW0214/paper/viewFile/601/581/SC-F14-O08.pdf

            http://www.icrwhale.org/scJARPA.html (130 peer reviewed publications are claimed here)
            http://www.icrwhale.org/eng/61JARPAResearchResults.pdf (slightly older version, but with some different revelations)
            https://events.iwc.int/index.php/workshops/JARPAIIRW0214/paper/viewFile/599/579/SC-F14-O06.pdf (this paper is extremely important, because it suggests that some, possibly a lot, of the research is actually flawed and should thus be subject to retractions, or at least corrigenda. Without a doubt, this paper, and hopefully my comments, may spur some of the more curiously defensive to start to examine this literature.) This report claims, very boldly:
            “JARPA and JARPA II have collected a large number of observations from Antarctic minke whales taken under Special Permit. However large numbers of observations do not necessarily lead to reliable inferences about the demography of the species… The substance of the criticism was that the published analyses could not be considered reliable because of unaccounted for sources of variation… While some models suggest statistically significant trends in some body condition parameters others do not, with the conclusion that the data collected during JARPA and JARPA II are insufficient for determining whether there have been trends in Antarctic minke whale body condition or otherwise.”

            JATdS

            March 31, 2014 at 4:35 pm

  4. This is because neuroscience and especially stem cell research are both fields attracting a lot of public attention (both for similar reasons – stem cells as the fountain of youth, and neuroscience in the hope of finding brain turbo-chargers or dealing with dementia, etc.). It seems that wherever money can be made (and research grants and tenure mean money), a certain amount of fraud becomes inevitable. On the other hand, I would assume that in these competitive fields fellow researchers are equally more interested in exposing fraud, because that improves their own relative position. What can’t be helped, and what that college dean seems to allude to, is that the more digital data capture becomes (remember the old lab books with Liebig’s or Rutherford’s or Curie’s handwriting?), the easier it is to forge, even to invent whole data series. Done right, fabricated data can even be immunized against statistical detection methods by mimicking the distribution that naturally measured data would show.

    crisismaven

    June 24, 2014 at 9:24 am
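
The “statistical detection methods” this comment alludes to include simple screens such as terminal-digit analysis: in many genuinely measured datasets the last digit is roughly uniform, while carelessly fabricated numbers often are not. A minimal sketch in pure Python standard library follows; both datasets are simulated, and the biased digit distribution for the hypothetical forger is invented purely for illustration:

```python
import random
from collections import Counter

def last_digit_chisq(values):
    """Chi-square statistic against a uniform terminal-digit distribution.

    With 10 digit bins this has 9 degrees of freedom; values far above
    ~16.9 (the 5% critical value) are suspicious.
    """
    counts = Counter(v % 10 for v in values)
    expected = len(values) / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

random.seed(0)

# "Honest" measurements: terminal digits close to uniform.
measured = [random.randrange(1000) for _ in range(1000)]

# A hypothetical forger who over-uses 7 and avoids the round digits 0 and 5.
fabricated = [10 * random.randrange(100) +
              random.choice([1, 2, 3, 4, 6, 7, 7, 7, 8, 9])
              for _ in range(1000)]

print(f"measured:   chi2 = {last_digit_chisq(measured):.1f}")
print(f"fabricated: chi2 = {last_digit_chisq(fabricated):.1f}")
```

As the comment notes, a careful forger defeats exactly this check by sampling digits uniformly, which is why such screens are a tripwire rather than proof.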


We welcome comments. Please read our comments policy at http://retractionwatch.wordpress.com/the-retraction-watch-faq/ and leave your comment below.
