When researchers raised concerns about a 2009 Science paper describing the “reactome array,” a new way to screen for enzymatic activity, the lead author’s institution launched an investigation. The paper was ultimately retracted in 2010, citing “errors and omissions.”
It would seem from this example that the publishing process worked, and that science’s ability to self-correct cleaned up the record. Not so, say researchers Ferric Fang and Arturo Casadevall.
Fang, of the University of Washington, Seattle, and Casadevall, of Johns Hopkins — who have made names for themselves by studying retractions — note today in an article for Chemistry World that there is more to this story than meets the eye.
As Fang (a member of the Board of Directors of our parent organization) and Casadevall recount in “The illusion of self-correction”:
Last year, Pere Puigdomènech, chair of the CSIC ethics committee, wrote a commentary on the Spanish experience with research integrity issues that included a brief discussion of the reactome array paper. This caught the attention of Thomas Hettinger, a chemist at the University of Connecticut, US, who after reading the initial Science paper had performed a forensic analysis of the mass spectrometry data. To Hettinger’s dismay, he found that the authors had mistakenly used molecular weight instead of molecular mass in predicting the mass values of the metabolites. The former is simply the sum of the atomic weights of the component elements; the latter is calculated from the atomic masses of the dominant isotopes. He was also perturbed by the highly consistent relationship between the values calculated and those found by the array, and the listing of found values for some ‘impossible’ compounds. He concluded that the ‘lack of variation of the found values proves that they have been fabricated’. Hettinger contacted Science with his findings, but the journal told him that investigation of possible misconduct is the responsibility of the institution, not the journal.
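To see why the substitution matters in mass spectrometry, here is a minimal sketch of the distinction Hettinger flagged, using glucose as a stand-in metabolite (an illustration of the general point, not the paper’s actual data): the average molecular weight and the monoisotopic mass of the same molecule differ by roughly 0.1 Da, easily enough to mismatch high-resolution MS measurements.

```python
# Illustrative only: glucose (C6H12O6) as a stand-in metabolite.

# Average atomic weights (weighted over natural isotope abundances)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
# Masses of the dominant (most abundant) isotopes: 12C, 1H, 16O
MONOISOTOPIC_MASS = {"C": 12.000, "H": 1.007825, "O": 15.994915}

def total_mass(formula, table):
    """Sum per-element contributions for a formula given as {element: count}."""
    return sum(table[element] * count for element, count in formula.items())

glucose = {"C": 6, "H": 12, "O": 6}

avg = total_mass(glucose, ATOMIC_WEIGHT)        # "molecular weight"
mono = total_mass(glucose, MONOISOTOPIC_MASS)   # monoisotopic mass

print(f"average molecular weight: {avg:.4f}")   # ~180.16
print(f"monoisotopic mass:        {mono:.4f}")  # ~180.06
print(f"difference:               {avg - mono:.4f} Da")
```

A high-resolution mass spectrometer measures something close to the monoisotopic value, so predictions computed from average atomic weights should systematically miss the observed peaks — one reason a too-tidy agreement between predicted and “found” values drew suspicion.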
Neither last author Manuel Ferrer, at the Institute of Catalysis and Petrochemistry (ICP) in Madrid, nor first author Ana Beloqui, at the Karlsruhe Institute of Technology in Germany, responded to requests for comment, the article notes.
Beloqui has previously denied any wrongdoing, stating that ‘the array generally worked as anticipated’.
When we contacted Ferrer in 2010 about the retraction, he told us he was still using the technology:
At the moment, the clearest statement I can provide to you is that a few well-known international companies have successfully used the metabolic array technology, and that we continue using it.
At the moment we are working on two complementary lines. The first is to make the array available to academic research laboratories and to provide it to well-recognized international companies. In both cases, the technology has been successfully employed; in the latter case, the CSIC approved the testing assays by companies. The second is that some of the authors have started an exhaustive analysis of the data, and we are doing additional analyses in order to publish the chemical part of the paper again.
As I have mentioned several times, I agree that there were some errors in the paper, although that does not mean the technology is invalid. This was the reason to retract the paper.
This incident raises questions about science’s ability to self-correct, Fang and Casadevall argue:
Science can be self-correcting, but this requires the concerted efforts of scientists, journals, institutions and governments. The self-correction process lacks transparency and consistency, and many potential conflicts of interest may interfere along the way. The reactome array paper retraction notice cites only error on the part of the authors and not the wider concerns.
The story of this reactome retraction also raises concerns about how allegations of misconduct are handled in many countries, they add:
In Spain, as in many other countries, there is no independent agency with the authority to oversee investigations of alleged research misconduct. It is time for countries, journals, scientific societies and funding agencies to forge a consensus for a uniform approach to the problem of questionable papers in the literature. As a community, we can and must do better. The credibility of science is at stake.
Fang told Retraction Watch he believes this is not the only retraction that deserves a second look:
In our 2012 PNAS paper, Arturo, Grant [Steen] and I found many examples of papers retracted for misconduct or suspected misconduct in which the retraction notice gave the impression of honest error.
Some of those examples were cases we had covered, and one of the messages of the 2012 paper is that basing statistics on retraction notices understates the frequency of misconduct as a reason for retraction. Previous studies had found fewer than half of retractions were due to misconduct, while the 2012 paper found misconduct in two-thirds.
Fang continued:
In a follow-up 2014 FASEB Journal paper, we listed a number of papers containing serious errors that had not been retracted. This suggests that authors are reluctant to retract papers, and that they may prefer to give the impression of honest error even when there is evidence of misconduct. We feel that the ‘reactome array’ paper illustrates the latter point. It is unlikely that this represents a unique case, but I cannot specify other instances.
They uncovered this particular case when researching larger questions in publishing, Fang added:
We were invited by Philip Robinson to provide a commentary for Chemistry World relating to deficiencies in scientific research and possible solutions. I looked at some retracted papers from the field of chemistry and came across the ‘reactome array’ paper along with Thomas Hettinger’s analysis. It was surprising to see that this story had not been previously covered in greater depth. There were many articles following the initial publication and subsequent furor, but it seemed that interest among science journalists waned as soon as the paper was retracted. This led us to contact the parties involved and try to understand what had occurred. Dr. Hettinger and Dr. Puigdomènech were particularly helpful.
Fang reaffirmed how this case illustrates the importance of a coordinated effort to investigate allegations about research:
The big picture is that the way in which problematic research and misconduct are currently approached is haphazard and inconsistent. Science would benefit by a consensus on the responsibilities of countries, institutions, journals and individuals when potential problems are identified.
Casadevall further emphasized this point to us:
A key point in the paper is that for science to be really ‘self-correcting’ requires collaboration from all parties including scientists, journals, institutions and governments. What is needed is a consensus of how to proceed in correcting science once a problem is identified. That is a discussion that the scientific community needs to have and it must be an international conversation.
The international harmonization of research misconduct definitions and policies is way overdue (see also David Resnik’s paper on this subject, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3965194/) and so is the need to enforce such policies at a global level. However, perhaps because there are so many obstacles to overcome (e.g., politics, culture) to make this harmonization possible, I really don’t see the group-will, let alone the necessary financial resources and support from the international community, to meet this important need. I wish that major global entities, such as the WHO or UNESCO and other major science organizations (e.g., AAAS) and funding agencies (NSF, ERC) would recognize the importance of this problem, get together and take appropriate action.
Michael Levine challenges the latest Casadevall paper at PubMed Commons and PubPeer, stating: “While the resulting analysis claims that efficiency has decreased in the last decade, we argue that (i)-the analysis performed is insufficient to make that claim, and (ii)- the findings do not support the conjecture that a lack of relevance or rigor in biomedical research is causing stagnation in medicine and public health.”
https://pubpeer.com/publications/63617FD03DADDEAE8402858637853D
“Increasing disparities between resource inputs and outcomes, as measured by certain health deliverables, in biomedical research,” Anthony Bowen and Arturo Casadevall, Proc. Natl. Acad. Sci. U.S.A. 112 (2015). http://www.pnas.org/content/112/36/11335