White House takes notice of reproducibility in science, and wants your opinion

The White House’s Office of Science and Technology Policy (OSTP) is taking a look at innovation and scientific research, and issues of reproducibility have made it onto its radar.

Here’s the description of the project from the Federal Register:

The Office of Science and Technology Policy and the National Economic Council request public comments to provide input into an upcoming update of the Strategy for American Innovation, which helps to guide the Administration’s efforts to promote lasting economic growth and competitiveness through policies that support transformative American innovation in products, processes, and services and spur new fundamental discoveries that in the long run lead to growing economic prosperity and rising living standards. These efforts include policies to promote critical components of the American innovation ecosystem, including scientific research and development (R&D), technical workforce, entrepreneurship, technology commercialization, advanced manufacturing, and others. The strategy also provides an important framework to channel these Federal investments in innovation capacity towards innovative activity for specific national priorities. The public input provided through this notice will inform the deliberations of the National Economic Council and the Office of Science and Technology Policy, which are together responsible for publishing an updated Strategy for American Innovation.

And here’s what’s catching the eye of people interested in scientific reproducibility:

(11) Given recent evidence of the irreproducibility of a surprising number of published scientific findings, how can the Federal Government leverage its role as a significant funder of scientific research to most effectively address the problem?

The OSTP is the same office that, in 2013, took what Nature called “a long-awaited leap forward for open access” when it said “that publications from taxpayer-funded research should be made free to read after a year’s delay.” That OSTP memo came after more than 65,000 people “signed a We the People petition asking for expanded public access to the results of taxpayer-funded research.”

Have ideas on improving reproducibility? Emails to [email protected] are preferred, according to the notice, which also explains how to fax or mail comments. The deadline is September 23.

31 thoughts on “White House takes notice of reproducibility in science, and wants your opinion”

  1. “(11) Given recent evidence of the irreproducibility of a surprising number of published scientific findings, how can the Federal Government leverage its role as a significant funder of scientific research to most effectively address the problem?”

    Uh. . . well, providing stable funding, including for repeat studies (which, in themselves, would not be considered innovative), longitudinal studies (likewise, in many cases), and the publication of negative results, would be a great help!

    1. +1
      We spend millions on grants to produce something new, but if you just want to repeat experiments, you’re on your own. There are also too many short-term positions and very few intermediate or long ones. Many labs use fixed-term scientists (PhD students, postdocs) as cheap labor, and if there’s no one to transfer the knowledge to before they leave, the know-how is gone.

  2. The incentives to double-check the work of others would have to be changed. The system is configured to maximize production of novelty. What is the incentive to assure reproducibility, short of imposing a cross-validation requirement with a “collaborator” in a different lab in a different university?

    I suppose the other possibility is converting NIH intramural research into a cross-validation enterprise. Another possibility would be to gear undergraduate research programs, post-bac research and summer research programs into make-work programs that validate individual components of published research.

  3. Change the law back to requiring all government-paid work to become public domain again. Then we can look at all that is created and improve it further.

  4. Apropos of this discussion, there is a special issue of the journal “Publications” (http://www.mdpi.com/journal/publications/special_issues/scientific-publishing) devoted to “Misconduct in Science,” edited by R Grant Steen. Our paper (HZ Hill and JH Pitt, “Failure to replicate: A sign of scientific misconduct?”) was just accepted on Monday. In it, we were able to analyze raw data from two publications in the journal “Radiation Research,” and we emphasize the importance of (1) familiarity with the literature in the field, (2) access to the raw data used to construct graphs in the figures of reports and grant applications, and (3) the power of statistics to smoke out questionable numerical data. The 7 other articles in this special issue are well worth looking at as well.

    1. I have concerns about at least one paper in that special issue [1], and Beall has registered concerns about the publisher MDPI [2]. These should also be taken into serious consideration by the White House. 1/7 papers = 14% of the special issue.
      [1] http://retractionwatch.com/2014/01/27/a-rating-system-for-retractions-how-various-journals-stack-up/
      [2] http://scholarlyoa.com/2014/02/18/chinese-publishner-mdpi-added-to-list-of-questionable-publishers/
      It is time to record the concerns about the Bilbrey et al. paper in PubPeer.

      1. I have a great deal of respect for Jeffrey Beall, but he is, after all, only one person making decisions about the publishers and journals on his list. It has gotten to the point that it should be taken over by a committee, commission, whatever. My interactions with the Editor and editorial staff of the “Publications” journal were extremely professional. Our paper was not an easy one to write, and Dr Pitt and I have been trying to publish our analyses of those data and more for almost 2 years. We have experienced more than 10 rejections, and in most cases we never made it past the editorial office. We received 3 very favorable reviews from one journal, but the EIC had cold feet and sent it to a reviewer I suspect he knew would trash it, and he did. Our paper in “Publications” had 6 reviews before it was finally accepted. It is also important to know that there was no publication fee, and I believe that charging fees (being in it only for the money) is the strongest reason publishers and journals end up on Beall’s list.

        1. Dear Dr. Hill, I was not actually being critical of your paper. I simply pointed out that I believe one paper in the same special issue has problems, and even though those problems were reported to the publisher, to the guest editor, and in fact to the authors and all co-editors, not a single erratum has yet appeared. If they don’t like my criticisms, and even if the publisher has apparently blocked my access to MDPI’s submission system, then why has the editorial board not independently published its own erratum, which I believe is necessary?

          Separately, as you may be aware, I have other criticisms of my own about Beall, his blog and his interpretations. But close examination of Beall’s criticisms of MDPI, and of other blogs that also criticize MDPI, will show that that publisher has problems, several of them academic. Once again, I reiterate that these issues are most likely highly independent of your own paper, but they need to be pointed out and discussed openly, especially since you are recommending this special issue and publisher to the White House, and because it is you who drew attention to your paper and that special issue. So let them see both heads and tails of the MDPI coin. They can then make their own independent decision. Regarding your own paper, I can empathize with the fact that you passed through 10 rejections first, and then another 6 rounds of reviews before the paper was accepted. The key question is: why was the paper rejected 10 times? Maybe you could share the reasons for rejection with us, and by what journals/publishers. These issues are not frequently discussed, but they could advance the overall academic discussion related to this story and to retractions overall. As for the advantages of not paying anything to publish, you have the full weight of my support there.

          1. The most frequent reason was that our paper would not be found interesting by the journals’ readers. If you want to know more, please go to my website http://www.helenezhill.com and look under the tab for publishing. You can follow the links that are underlined and highlighted in red.

  5. My advice to the fed, overall: Put more money into NIH extramural funding! The DOD gets 15% of the federal budget, while the total research enterprise (NIH, NSF and DOD medical research) gets about 1% altogether. Defense is critical, but so are health and innovation. Let’s also remember that the first iteration of the NIH was established to mobilize science in the service of the nation, as Roosevelt said: “…not only guns and airplanes…” but science.

    My advice with regard to the reproducibility issue: make a biostats class mandatory for PhD students at institutions receiving federal funds. I managed to get through undergrad and a PhD without a single course in stats. I’m only now realizing how important it is for making reproducible claims.

    1. A few suggestions, off the top of my head, as a non-US citizen:
      a) Screen out scientists who tend to publish in journals of proven suspect quality on Beall’s list of “predatory” journals, and who are thus squandering public funding on open access that does not ensure adequate quality control.
      b) Use a three-strike rule. Three retractions = a ban on all federal funding for X years; more than 5 retractions = a ban on all federal funding for X+Y years; more than 10 retractions = a total ban on any federal funding. The retractions must be based on the authors’ errors and not on the publisher’s errors.
      c) Scrap ORI, which appears to be doing a snail’s job of justice. Fuse the funding of that organization and draw only the good intellectual base of ORI to form an effective, well-paid and motivated action team that forms a part of OSTP. Make sure that there is one small branch of the DoD with powerful and hawkish lawyers who are able to retrieve squandered funding, even from the personal assets of individuals who have been involved in academic misconduct. Misconduct does not necessarily have to depend on a verbal or written declaration of intent; it can be observed from a person’s track record and research and/or publishing activity, which can be academically judged by a panel of experts in that field of study.
      d) Cut funding for luxuries, including travel, hotels, meals and other non-essential costs that should be borne by the scientists themselves. Too much funding is being wasted on luxuries under the guise of “research-associated costs”.
      e) Down-size government, stop wasting money on wars waged overseas, pull back all troops, and spend the saved money on scientists with integrity, honour, passion and ideas. Science forms the base of a society and its advancement, and the advancement of science cannot take place on fresh air alone. Funding must be correctly attributed to worthy projects, even those that only explore a purely hypothetical theme without necessarily deriving a “financial” reward down the road, because these are the studies that advance humanity.
      f) Ensure that there is accountability at every step. When a PhD student is screened for funding, what credentials do they have? Factor in the impact factor, which is useful, with the number of retractions and other important elements (see my Global Science Score; [1]); the same applies to any researcher applying for a post-doc, a faculty position, or even an established PI hoping to get an upgrade to a professor position. All must be held accountable each step of the way, and someone should be overseeing this. It doesn’t take a massive work-force to achieve this, just a more efficient one (i.e., a larger government is not necessary). That means that in order for government to be vigilant of science, government must also be responsive to down-sizing and being accountable to society, too. It’s a two-way street, otherwise this initiative will flop.
      g) Build in a contribute-to-society factor to guarantee funding. Make sure that anyone who gets funding puts back into society what tax-payers are giving them. This does not mean fancy speeches over cocktails and exclusive hotel meetings. It means active community projects to educate and motivate the young, employ the elderly and retired and seek new recruits into the science community to enrich it, and make the scientific panorama in the US more dynamic, more diverse, more invigorated, and more liberated of “elitism”.
      h) The US has been overtaken by China both in terms of funding and in terms of volume of published papers. The only way to regain a top position is to ensure a rounded approach and not just draconian measures that have no balance.
      i) Make sure that all processes are open and transparent. Corruption and fraud take place when these two elements are not factored into the equation.

      [1] The Global Science Factor: http://www.globalsciencebooks.info/JournalsSup/images/2013/AAJPSB_7(SI1)/AAJPSB_7(SI1)92-101o.pdf
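      A minimal sketch of the tiered rule in suggestion (b), as I read it. The ban lengths are the thread’s unspecified X and Y; the values X=3 and Y=2 years below are hypothetical placeholders, not numbers anyone proposed:

```python
def funding_ban_years(retractions: int) -> float:
    """Hypothetical tiered penalty for author-error retractions.

    Thresholds follow the three-strike proposal above (3, >5, >10);
    the ban lengths X=3 and Y=2 years are illustrative assumptions.
    """
    X, Y = 3, 2  # assumed values; the proposal leaves X and Y open
    if retractions > 10:
        return float("inf")   # total ban on any federal funding
    if retractions > 5:
        return float(X + Y)   # longer fixed-term ban
    if retractions >= 3:
        return float(X)       # first-tier fixed-term ban
    return 0.0                # fewer than three strikes: no ban
```

      Under these assumed values, a sixth retraction triggers the 5-year tier and an eleventh the lifetime ban, which is the reading the reply below also arrives at.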

      1. You have clearly given this list some thought.
        Please tell us why you would only call a first ‘strike’ of temporary suspension from federal funding upon the THIRD retraction;
        the 2nd strike upon the SIXTH retraction (longer suspension);
        and
        the 3rd strike (lifetime bar) after the ELEVENTH?

        Why don’t you think researchers who defraud the govt just once or twice should be sanctioned in any way?
        Or at least required to repay grants they defrauded before they apply for more?
        Would you also wait for a third offense to prosecute other financial crimes like embezzlement and insider trading?

        I think even one retraction for documented research misconduct should bar someone for life from the privilege of having taxpayers fund their work. There are other funding sources they can apply to if they want to keep doing research independently in academia, or they can get a private sector job.

        The many honest scientists applying for federal funds should not have to compete in blinded reviews against others who have already defrauded the govt.

        1. Albert, it is a good question. The numbers are relative. It is a basal suggestion and spark, one that needs to be hammered out, of course. If you observe quite a few retractions, there does in fact appear to be an element of honest error, or plain ignorance, in several cases, which would deserve the retraction (if the error affects the scientific conclusions) and not more than a slap on the wrist. In baseball, you wouldn’t get thrown out of the game after a single strike, would you? And in soccer, you need at least a yellow card before you get the red (with the possibility of a few fouls in between). So, to expect a draconian system that inflicts serious penalties immediately after the first retraction could be dangerous, because you may lose honest and hard-working scientists out of fear of the system. I suggested thresholds of 3, 6 and 11, but they could easily be 2, 3 and 4. The numbers are not really that relevant (for now) if the next step of the penalization system is not in place. One has to think about basic human psychology, I guess, about what might result in justice, re-instill academic rigour, and be a repellent to others who may wish to actively commit academic fraud. I can only think of short-, mid- and long-term bans from journals, publishers and from receiving funding. But one must also remember that situations and people can be recovered. Light criminal sentences, community service, and fines to pay back a portion or all of squandered funds would all effectively reform a scientist who may be straying off the ethical tracks, and also serve as a deterrent to others contemplating it actively, provided that the rules and new measures can be enforced. If there is no rapid and effective enforcement agency, then our discussion is useless.

  6. Publications are the currency of science, and the review of publications is the nexus where all forces promoting bad science meet. Therefore any solution will be best directed at this nexus. The best solution is to move away from the 50-year-old model of pre-publication review to post-publication review of scientific publications. This will destroy the “scientific nepotism” that promotes grant acquisition over good science as the goal of scientists.

    1. Daniel, your use of the term “currency” is key. Science is no longer a pure avenue of intellectual exploration. It is an economic tool (and weapon). That is precisely why the White House is interested, I believe, not really because it is interested in the “reproducibility” factor.

  7. “(11) Given recent evidence of the irreproducibility of a surprising number of published scientific findings, how can the Federal Government leverage its role as a significant funder of scientific research to most effectively address the problem?”
    Attempts to repeat experiments are probably a waste of time and money. It should be up to the universities, companies, publishers and granting agencies to police their own. This can be accomplished by requiring, for all documents submitted for funding and/or publication, that images be scrutinized for inconsistencies and falsifications, that software be used to check for numerical discrepancies, and that plagiarism tests be applied. Granting agencies and publishers should require demonstration of such scrutiny before accepting grant applications or papers. Stopping cheating at the source should go a long way toward addressing the irreproducibility problem.

    1. Not my area, but this paper on how hard it is to reproduce the (honest) work of your collaborators was eye-opening.
      http://www.cell.com/cell-reports/pdf/S2211-1247(14)00121-1.pdf

      It seems to me that allocating more funds for this type of collaborative, multi-institution work, where there is cross-validation, would go a long way towards improving reproducibility.

      Additionally, promoting explicit sharing of detailed method descriptions (or in my field, code) would help. Of course, there is a cost in terms of effort and time when it comes to such sharing of in-depth know-how of how to make your method work – I am not sure how academia can accommodate that. In industry, my code is immediately checked in (or taken over by an engineer in some form). However, my scripts from my graduate school days are long gone/obsolete – this is changing now in my field as more people are maintaining and sharing code (partially as a result of new government guidelines).

      1. I found the paper that you cite very interesting, and what sticks out the most is that the 2 protocols were not the same, so I have to wonder why each lab didn’t simply follow the other’s protocol exactly, without having to cross the country. That would have saved a lot of time, energy and money. But then one has to admire their honesty in reporting the solution to their problems. The paper also emphasizes the importance of stating precisely, in published papers, the steps that were followed in protocols. But that really has nothing to do with making up results.

        1. Right, my comments were mostly about trying to help with reproducibility in the much more common case of people reporting only a subset of what they did, and therefore making it very hard to reproduce their work. This is true in my field, and I can only imagine how common it must be in the life sciences, where there seems to be a lot of specialized know-how that differs from lab to lab.

  8. A few suggestions:

    Reduce the pressure on students/scientists: do not allow short (one-year) contracts, and give more stable funding. The extreme pressure (publish or perish) at certain US research institutions leads to too much bad and sloppy practice and misconduct.

    Incorporate more post-publication peer review information (e.g., PubPeer) into grant reviews and editorial/referee work.

    Journals and research institutions should follow a zero-tolerance policy for data duplication, manipulation and falsification. It is too easy to get away with this today. If flawed content is detected, an investigation should be initiated (by an independent committee), and if the original data (not completely new data) do not support the published results, the article should be retracted (not corrected) and the scientist should be banned from doing research for a certain time.

    More openness about retractions (more information) is needed, and conclusions from misconduct investigations should be made official. This is not the standard at US research institutions today.

    Official information for scientists, grant providers and institutes regarding predatory journals is highly needed.

  9. The OSTP’s own position on open access is part of the problem. Say what you want about the so-called pay publishers (and while we’re at it, let’s concede that both pay and open access publishers are in it for the profit), but a journal that brings in a few thousand currency-of-your-choice from every extra paper published by slapping it on the internet with little or no value added has far less incentive to enforce quality than a journal that depends on its reputation to attract advertisers and subscribers.

  10. 1. The fastest, most efficient way to increase the percentage of federally funded research that is reproducible is to reduce the percentage that is fraudulent, as that fraction is never reproducible except by more fraudulent science (deliberately deceptive, plagiarized, duplicative, fabricated, falsified, etc.).

    2. The fastest way to stop fraudulent research is to require researchers whose papers are retracted for SCIENTIFIC MISCONDUCT (not inadvertent or honest error) to refund all the federal funds involved before they can get another dollar.

    If PIs were held personally responsible for the funds under their control and their institutions also had to pay back their overhead, people would pay a lot more attention to research oversight and quality control.

    No other branch of federal contracting just temporarily suspends contractors or grantees who defrauded the govt without also insisting on restitution plus fines and penalties before reinstating their funding privileges.

    3. Incentivize the public to help clean up the literature and recover wasted federal funds by paying some percentage of recovered funds to whomever first posts evidence on PubMed Commons of scientific misconduct warranting retraction (by whatever explicit definition), even if the retraction does not follow for several years.

    1. Albert, you touch on pertinent points, but if it were as easy as the 1-2-3 steps you suggest, trust me, the problems would have been resolved long before RW was even born. This story is not only relevant to the US; it is a global problem, but if a solid solution can be found using the US as the test system, then this would have ripple effects on other countries, especially those that are economic allies of the US. The practical problem I see with your suggestions 1 and 2 is how to differentiate honest error from malice. Unless this can be differentiated, which is usually impossible unless the author provides an open and honest admission of dishonesty, there is a very strong possibility that any accusation of fraud can be interpreted as libel. So, an author’s claim that a duplication was accidental, or that plagiarism was not intentional, may actually be a valid defense, but the cash-for-ratting scheme you propose in 3 is definitely not the correct way to achieve scientific justice, I believe. I do agree that it is essential to recover wasted and lost funding, but to implement a witch-hunt-like system is dangerous and provides the wrong incentives. Money must never be the incentive, even if it is the main incentive currently in place that motivates a lot of scientists. I have always claimed that if you remove the factor of money, how many scientists would stay in science? Imagine we were to freeze salaries and grants for a year, and remove corrupting factors like the impact factor; I suspect that we would be left with about 10% of the current population of scientists. This would represent the true academic core, from which a new population should then arise, with the proper incentives.
      The only way to correct the literature moving forward is through passionate post-publication peer review, which could involve tools like PubPeer, PubMed Commons, self-publishing, double-blind traditional peer review, posting on blogs like RW (but not on too many blogs, so as not to dilute the effect) and other solutions, but ultimately a complement of steps that brings all of these into one “package of measures”. Accountability must be held by all parties: the authors, including the corresponding author, the PIs, the research institutes, the journal, the editors, the “peers”, the editors-in-chief, and the publisher. Politicians, governments, commerce and corporations are the vultures that feed off the problems and also benefit from the sacrifices and efforts of scientists, but ultimately it is society that benefits, because knowledge brings layers of depth of understanding. So, scientists need to restructure within the mould that cocoons them, but they also need to interact with these surrounding influential forces without being swayed or manipulated by them.

      1. Every other federal agency rewards those who report waste, fraud and abuse with a share of funds recovered in qui tam lawsuits. Why should those who disclose Medicare or defense-contracting fraud get rewarded, but not those who disclose research fraud? It’s all federal taxpayer money.

        Plagiarism, duplicate publication, image manipulation and many other types of research misconduct warranting retraction are actually quite easy to document, as PubPeer comments demonstrate, and don’t require a lot of money or time to find.

        But the fact that there are only about 1/10th as many signed comments on PubMed Commons shows that people are more restrained in what they have to put their name on.

        My proposal was only to pay whistleblowers some share of recovered federal funds. Obviously no one will recover anything from filing allegations of research misconduct that turn out to be false when further investigated.

        And whistleblowers could be held liable for damages in a libel case, but only if they knew their allegations were false when they publicly asserted them.

        Grad students and deadwood faculty would take journal club a lot more seriously if there were a chance for them to get NIH funding just by finding evidence of research misconduct in already published papers that leads to retraction.

        I think a percentage of recovered funds is better than fixed rewards such as those the Justice Dept pays “for information leading to the arrest and conviction” of common criminals. Exposing fraud in a $10 million study should pay more than in a $100,000 study.

        To avoid having to litigate all these cases, NIH should require as a condition of funding that institutions promptly repay grants if/when any paper citing the support of the grant is retracted for any type of misconduct.

        And it could also bar grantees from publishing federally funded research in journals that do not have transparent retraction policies consistent with COPE or other guidelines. NIH should also prohibit federally funded research from being published in predatory journals (however defined).

        1. Albert, now this is a veritable discussion! Your points are excellent, especially the comparison with the QuiTam lawsuits. In principle, the proposal sounds good, and those who are out to get research funding off whistle-blowing could be headed for a profitable market because I believe that the number of studies with “errors” (broadly-speaking) is massive, and still largely unexplored. So, there appear to be two key questions that then emerge from your proposal. Firstly, which honest scientist would want to risk their “name” reporting errors in the literature? Are you suggesting that all steps of the whistle-blowing and remuneration are anonymous? I can’t seem to see a system that operates in full transparency if the whistle-blower’s identity is kept anonymous. Assuming that OSTP adopts such a policy, the fact that there exists a pay-per-error/fraud remuneration system would surely feed a hungry pool of opportunists, don’t you think? Perhaps, in an extreme case, to a level where only those who are vocal will be able to secure profitable funding, as you clearly explained in your calculations. It could lead to a situation where “middle-class” scientists (both in terms of ability and in terms of funding) will give up applying for regular funds from the federal government because it could be more profitable to get funds from ratting on colleagues (or the competition). If I could receive a buck for every error I think I have already detected in the plant sciences, I would already be a millionaire by now. But personally, that is not my incentive, so my argument is that to have a monetary incentive might introduce a serious COI into the equation, ultimately annulling the initial proposal you are putting forward. Look, I see where you are coming from, and I understand the angle, but I am highly reticent that it could work without being seriously abused, so I think this idea needs a lot more thought and input from professionals in many fields.

          I also take issue with this suggestion: “And it could also bar grantees from publishing federally funded research in journals that do not have transparent retraction policies consistent with COPE or other guidelines.” Currently, the retraction policies in place are variable, or follow what COPE states. “Other guidelines”? Which ones would those be? Once again, I feel that you are keeping grantees and scientists from making a free choice. You are basically saying: support a publisher only if it follows COPE’s ethics.

          As for the “predatory” nature of OA journals, I am assuming that you are referring to Beall’s lists. If yes, then this is extremely dangerous because, although Beall has emerged as the de facto leader in critical analysis of OA, his lists are, I believe, seriously flawed, the greatest flaw being the lack of a scoring system to quantify the “predation”, which is why I developed the Predatory Score [1]. So, unless a highly specific scoring system can be applied to journals, scientists and publishers, a blanket approach to imposing rules and banning scientists from funding could be a dangerous path to follow, and it needs very deep and detailed reflection before rushed implementation to boost the dwindling government coffers.
          [1] http://www.globalsciencebooks.info/JournalsSup/images/2013/AAJPSB_7(SI1)/AAJPSB_7(SI1)21-34o.pdf
