The Retraction Watch Transparency Index

Projecting transparency, photo by Dennis Sylvester Hurd via Flickr

Retraction Watch turns two on Friday (August 3), and while you might be stumped about what to get us, we’ll make things easy for you. All we want to mark the occasion is your … expertise.

In its August 2012 issue, The Scientist published an opinion piece in which we call for a “Transparency Index” for journals that, in the spirit of the impact factor, would signal to the scientific community how willing editors and publishers are to share how they make decisions. For example, we write, the index could include:

  • The journal’s review protocol, including whether or not its articles are peer-reviewed; the typical number of reviewers, time for review, manuscript acceptance rate, and details of the appeals process
  • Whether the journal requires that underlying data are made available
  • Whether the journal uses plagiarism detection software and reviews figures for evidence of image manipulation
  • The journal’s mechanism for dealing with allegations of errors or misconduct, including whether it investigates such allegations from anonymous whistleblowers
  • Whether corrections and retraction notices are as clear as possible, conforming to accepted publishing ethics guidelines such as those from the Committee on Publication Ethics (COPE) or the International Committee of Medical Journal Editors (ICMJE)

Please read the piece, then post your thoughts here about how we can refine the idea. Which of these criteria should stay? Which should go? How should they be weighted?
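
To make the weighting question concrete, here is a minimal sketch of how a weighted index might be computed. The weights and criterion names below are invented for illustration only; they are not a proposed methodology.

```python
# Minimal sketch: a Transparency Index as a weighted sum of per-criterion
# scores. All weights and criterion names here are illustrative assumptions.
WEIGHTS = {
    "review_protocol_disclosed": 0.25,
    "underlying_data_required": 0.20,
    "plagiarism_and_image_checks": 0.20,
    "misconduct_mechanism": 0.20,
    "clear_notices_per_cope_icmje": 0.15,
}

def transparency_index(scores: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores, each on a 0-1 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weight * scores.get(name, 0.0) for name, weight in WEIGHTS.items())

# Example: a journal that discloses its review protocol and screens for
# plagiarism and image manipulation, but does not require underlying data.
print(transparency_index({
    "review_protocol_disclosed": 1.0,
    "plagiarism_and_image_checks": 1.0,
    "misconduct_mechanism": 0.5,
    "clear_notices_per_cope_icmje": 1.0,
}))  # about 0.70
```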

26 thoughts on “The Retraction Watch Transparency Index”

  1. The Transparency Index (TI) is a great idea that should promote and encourage best practice while exposing publication malpractice. To be useful, the TI should predominantly reflect one thing, which is:
    “Did the journal (editor and publisher) do the right thing?”

    Retractions can happen for a variety of reasons.
    Many retractions might indicate poor reviewing and/or editorial practice. In other cases, however, a retraction might indicate the opposite – that the editor did the right thing after discovering irregularities of which s/he was unaware at the time of publication.

    So a TI based only on the number of retractions out of the total number of publications would blur the real picture.

    On the other hand, a journal (editor and publisher) might have brilliant policies, frameworks, and procedures for dealing with misconduct that are never (or seldom) used in practice. Such duplicity would amount to malpractice intended to cover up previous malpractice, and would be even more dangerous, as it further deceives the readers.

    Therefore, I think that the TI should have as one of its main components a Do-the-Right-Thing (DRT) score, which can be measured as:
    the number of retractions divided by the number of evidence-backed misconduct cases brought to the attention of the journal.

    In an ideal world, DRT would be 1.
    Example 1:
    Eight cases of irregularities, all backed by evidence, were brought to the journal’s attention, and the journal retracted all 8 articles (8/8 = 1).

    Example 2:
    Eight cases of irregularities, all backed by evidence, were brought to the journal’s attention, and the journal retracted only 2 articles (2/8 = 0.25).

    Provided that both journals have nice frameworks in place for dealing with misconduct, the latter journal, with fewer retractions, might appear in a better light, while in reality the second journal is much worse, since it refuses (for whatever reason) to do the right thing.
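
    A minimal sketch of how this DRT score might be computed (the function name and inputs are illustrative, not from the comment):

    ```python
    def drt_score(retractions: int, evidenced_cases: int) -> float:
        """Do-the-Right-Thing score: the fraction of evidence-backed
        misconduct cases brought to a journal that led to a retraction."""
        if evidenced_cases == 0:
            return float("nan")  # undefined when no cases were reported
        if retractions > evidenced_cases:
            raise ValueError("more retractions than reported cases")
        return retractions / evidenced_cases

    # The two examples above:
    print(drt_score(8, 8))  # 1.0: the journal did the right thing every time
    print(drt_score(2, 8))  # 0.25: the journal acted on only a quarter of cases
    ```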

  2. Transparency is key to solving most problems.
    Index systems do not work: the Science Citation Index, the impact factor, the h-index, etc.
    These systems, and those who run them, are not transparent.
    First, they must be forced to be transparent.
    Billions of dollars flood in worldwide annually.
    No one wants to lessen the amount of it that pours into his own pocket.
    In some countries, academic fraud is effectively official policy.

  3. This doesn’t solve the problem of a journal investigating alleged misconduct and doing little or nothing about it. The issue is: who is to judge the evidence? Journals cannot be entrusted with this task, since they have as much at stake in their metrics as authors do. The commercial success of a journal depends on its flourishing, and this is measured to a large extent by metrics. Regardless of the model (open access, learned society, commercial sector), the drive is identical: a dropping metric means dropping income. Authors are in the same boat; a “poor” publication record means no tenure or promotion.

    Unfortunately it is difficult to see how science can extricate itself from this vicious cycle, since the selection pressure rewards corrupt practice.

    One solution might be for all papers to have a “confidence poll”, where readers rate their confidence in the data. When the level of confidence drops below a threshold, the journal has to investigate and justify, through peer review, its decision to maintain or retract the paper. This should protect against organised bullying while still rooting out fraud. Voting may need to be made semi-compulsory to ensure sufficient input, which might be achieved by requiring all users, including those with an institutional subscription, to register and to vote every 5 or 10 downloads, otherwise losing download capability for a month.
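
    A minimal sketch of the threshold logic described above; the threshold value and the minimum-vote guard are hypothetical choices, not part of the comment:

    ```python
    CONFIDENCE_THRESHOLD = 0.5  # hypothetical trigger level

    def needs_investigation(votes: list[float], min_votes: int = 30) -> bool:
        """Flag a paper for journal investigation when mean reader confidence
        (votes on a 0-1 scale) falls below the threshold. Requiring a minimum
        number of votes guards against a handful of hostile ratings
        (organised bullying) triggering a review on their own."""
        if len(votes) < min_votes:
            return False  # not enough input yet to judge
        return sum(votes) / len(votes) < CONFIDENCE_THRESHOLD
    ```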

    1. I agree with you. This is more likely to control these issues, but… who will enforce it? That’s part of the problem.

  4. Perhaps Institutions need their own transparency index.

    I think there is a limit to what journals can be expected to do outside the limited areas of plagiarism, duplication, and the rather odd problem of image manipulation (odd because, in 9 out of 10 cases I have seen, just a tiny bit of work on the part of the authors would have rendered their fraud undetectable to readers).

    Any serious investigation needs to interview people and look at lab records and raw data, and journals usually can’t do that.
    So perhaps a tracking system that monitors how institutions respond to, and meet their obligations to investigate, allegations of scientific misconduct is in order. Candidate #1 might be the University of Mainz.

  5. Today, October 22, marks an important milestone in Doing-the-Right-Thing.
    Lance Armstrong has been stripped of his 7 Tour de France titles.
    The longest-running cheating has finally ended.

    What does this have to do with Retraction Watch and the Transparency Index?
    Armstrong’s case is an important lesson for all cheaters in academic publishing, and especially for all publishers. Furthermore, unlike in sport, where a cheat is a moment in time, an academic cheat becomes PERMANENT once it is published. Academia should realise that the truth comes out sooner or later, so spare yourself a Fall-From-Grace and Do-the-Right-Thing. The Transparency Index provides the opportunity for editors/publishers/institutions to come clean when cheats are reported. So, instead of ignoring the TI, say a big “Thank you” to RW and Do-the-Right-Thing.

    A good test case would be the publication of Benach and Muntaner in Gaceta Sanitaria, Elsevier (featured here: http://www.retractionwatch.com/2012/09/20/slew-of-retractions-appears-in-neuroscience-letters/), as this case represents multiple duplications plus copyright irregularities.

  6. Today marks another important milestone in Doing-the-Right-Thing.

    Julia Gillard, Prime Minister of Australia (backed by all parties in the Parliament, including the opposition), has announced a Royal Commission (the highest form of independent public inquiry) to investigate decades of child sex abuse by priests of the Catholic Church in Australia. More than six decades of persistent, systemic cover-up of thousands of cases of child sex abuse have ended. After the longest-running cheating in history ended (see my comments above), now the longest cover-up has ended.

    PM Gillard said: “I believe we must do everything we can to make sure that what has happened in the past is never allowed to happen again”, and also: “We need to learn the lessons about how institutions can best respond when there are allegations of sexual abuse of children”.
    Australia’s most senior Catholic, Cardinal George Pell said: “I believe the air should be cleared and the truth uncovered. We shall co-operate fully with the royal commission”.

    What does this have to do with Retraction Watch and the Transparency Index?

    Academia should learn an important lesson from this case: a cover-up of misconduct cannot last forever, even for the most powerful organisation in the world – the Catholic Church.
    Just as the Church is now judged not by how it treats its over one billion members but by how it responds to cases of child sex abuse (someone on RW might say that this is unimportant, as it represents less than 0.01%), so academia, publishers, and COPE are judged by how they respond to cases of misconduct!

    The Transparency Index provides the opportunity for editors/publishers/institutions to come clean when cheats are reported. Otherwise, should they decide to cover up the misconduct, they will be held accountable one day, just as the Church is now.

    Elsevier and COPE, you still have the chance to come clean about the multiple acts of misconduct, including copyright violations, committed by Joan Benach and Carles Muntaner (featured in my comments here: http://www.retractionwatch.com/2012/11/07/make-it-a-double-alcohol-treatment-study-pulled-for-duplication/#comments).
    Do it now or face the consequences later!

  7. For human subjects studies, one additional cross-check would be to examine the Clinicaltrials.gov registration history. In my recent analysis, roughly 50% of medical research studies register after first subject/first visit – on average >200 days after. This invites all sorts of opportunities for abuse, from redefining the study endpoints based on a first look at the data, to changing entry criteria, to modifying the statistical analysis plan, to pulling the plug on the study entirely and acting as if it had never happened. Publication in ICMJE journals is currently contingent on trial registration; one step further would be to require that registration in fact preceded the start of the study, or, in situations where it did not, for the authors to give a satisfactory explanation of why it did not.

    See Gill CJ, “How often do US-based human subjects research studies register on time, and how often do they post their results? A statistical analysis of the Clinicaltrials.gov database,” BMJ Open 2012.
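
    A minimal sketch of the timing cross-check proposed above, assuming the registration date and the actual study start (first subject/first visit) date are both known; the function name and example dates are illustrative:

    ```python
    from datetime import date

    def registration_lag_days(registered: date, study_start: date) -> int:
        """Days between study start and trial registration. Positive values
        mean the trial was registered AFTER enrolment began, the pattern
        flagged above as inviting abuse."""
        return (registered - study_start).days

    # Example: a trial registered more than 200 days after first subject/first visit.
    lag = registration_lag_days(date(2011, 9, 1), date(2011, 1, 15))
    if lag > 0:
        print(f"Late registration: {lag} days after study start; explanation needed")
    ```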

    1. And I have located cases of people registering medical trials that are never performed at all. In one case, a dentist (with a marketing degree) registered medical trials in order to lend an air of legitimacy to the promotion, in patient communities, of a personal theory of disease with absolutely no supporting data at all. The spread of this theory was known to have promoted ‘back channel’ sales and trades of medications among patients. In this type of case, the opposite problem to late registration, registering a trial that is never run, could result in actual harm to those who believe the registration implies some study was undertaken.

      If there is to be any matching of actual trials to trials registered, it needs to work in _both_ directions!

  8. The T.I. is a great idea, but it is a workaround.

    The fundamental problem is the funding and its ties to intellectual property. As long as Pharma sponsors research, the conflict will continue. Pharma was designed to innovate, not do scientific research (that is the job of Universities and the NIH).

    Industry should not be asked to do scientific research, but rather safety and preliminary efficacy testing. The FDA, with its licensing and user-fee subsidies, will never become the watchdog it was intended to be. Let its role be limited to continuing as the industry hack it has become.

    A solution is for the scientific community to cite ONLY non-industry-funded research, and for the NIH to sponsor large confirmation trials. No more tenure and long résumés built on Pharma- and device-manufacturer-sponsored “trials”.

    Relieve the industry of a large expense, get their medications to market faster, then let the NIH sponsor post-marketing surveillance and large confirmation trials, with no conflicts. These trials should also have safety, efficacy, and numbers-needed-to-treat clinical-significance endpoints, and a cost analysis, with long-term clinical feedback through a post-marketing surveillance registry.

    Maybe some non-patentable “drugs” can be studied as well. Now that would be revolutionary.

    Then perhaps we can move to using these studies to write real guidelines.
    Until then, money will override the interests of Science and the Patient.

    1. Hear, hear. One worries, though, that the problem is even larger, given that politics influences the awarding of NIH grants, and that corporations give money to congressional candidates. As an interim step, transparency should include a black box warning on every published scientific article detailing the authors’ financial relationships with corporate sponsors. The black box warning should include the amounts and dates of the payments and the names of relevant products, so that the reasons for corporate interest are clear and obvious to readers.

  9. “transparency should include a black box warning on every published scientific article detailing the authors’ financial relationships with corporate sponsors”
    It should indeed include this. Many (most?) journals require that authors declare vested interests in regard to their studies. But if they “declare we have no financial interests that could in any way impact…” and the like, who is responsible for checking that this is actually true? And how is it to be checked? At journals we are already very hard pressed for time and resources; we sadly are simply unable to run a full check on everyone. That is without mentioning that such data may be inaccessible anyway, perhaps protected by law in some cases.
    The whole foundation of science is, in theory at least, based upon impartiality and accuracy of documenting and analysing data, without regard to what we would like the results to be. Technology has allowed us greater access to other people’s data and its easier manipulation, whether that be in proper analyses of real data, or in fabrication or manipulation of images, text and data. Cheating seems to have become much, much easier if you are so inclined. Due to the sheer number of publications produced these days, monitoring articles for such behaviour is practically impossible.
    For those who work at journals, catching such practices is imperative, but I have difficulty believing it is achievable without substantial help. In my view, very good antiplagiarism software should be available free: the free versions I am aware of have proved inadequate for journal purposes. Governments should be persuaded that it is also in their interest to catch the cheaters, which is why I believe software to help achieve this should be provided free of charge.
    Whilst aware that there are problems in good science, I only just discovered this blog and have spent the past few hours in despair at the extent of the problem we are facing. (I was already in deep despair at the steep decline in language quality I am seeing in print as time goes by!) One can only hope that institutions and governments will step up to the plate to demand better from academia and those of our society entrusted with forging the way forward for all mankind.

  10. Proposal for new project.

    A Transparency Index would be a bureaucratic nightmare. Private web sites are also a bad idea; they cannot do the job. I say this because they are written by third parties and because the response from the accused is absent. The sites publish selectively; RW apparently works 24/7. And the sites get sued… The fraud, meanwhile, spreads.

    I propose here a project for a web site, indexed both by area of science and by country, that will publish ALLEGATIONS signed with the true names of the complainants. As I wrote elsewhere on RW, allegations are protected against lawsuits. They can contain names, documents, links, all the proof needed. As people mature, and RW has made quick progress in helping them mature, the idea of “Allegations of Fraud and Misconduct in Science” (or a similar title) emerges as the right and necessary solution. When I read comments on RW, I see that individual cases very often (not just in my case) hit a dead end because most of the complaints are fragmentary, the responses are arbitrary, and responses from the accused individuals and organisations are absent. Currently, the scientific community and the complainants have no means of getting this response, although, I guess, a lot of private correspondence is influencing the outcomes.

    Public knowledge, transparency, has since time immemorial been the key to justice. Under various pretexts, scientific journals suppress this knowledge.

    The new site must be the mechanism for getting an answer from the accused. The absence of a response will be judged by the community accordingly, even though there is no mechanism to force one. It will be the responsibility of the site to send notice of the publication of allegations to the accused. I do not see any barrier to full and generous funding of such a site by a number of international bodies, including UNESCO. Publishing allegations remains the subject of some 90% of all press reports; it is 100% legal and 100% needed. Public universities are now signing scores of what I would call community obligations; they will sign an obligation to respond on this one. Journals will inform the public of such a new site.

    Just work out the proposals for the forms to be filled in on this site, my fellow scientists eager to stamp out fraud in science! Here is how our common dream of eternal transparency can be realised. Who is going to support this project and participate?

  11. Yes . . . the pressure to publish is a huge problem in academia and supersedes the obligation to teach at many universities. I remember attending a meeting where PhD candidates who also taught were told to “spend as little time as possible in teaching and devote yourself to getting published in top journals.” That same message was also conveyed at a department meeting of another top business school where I taught. (When I later led that school in publishing top-rated journal articles, I was nonetheless told by that same speaker to focus more on my teaching!)

    Although self-plagiarism and publishing incestuous content (i.e., repeating your own research without adding incremental value) are certainly academic problems, my colleagues and I have found that students, subject matter experts in ethics, and business school deans concur that there are far greater problems in academia. Our paper, “in press” at the Journal of Business Ethics, emphasizes that we need to carefully reassess our focus on academia’s obligation to its stakeholders.

    “It’s academic” has earned a reputation for being irrelevant, out of touch with reality, and ivory-tower in application. We need to “define reality,” as Max DePree (2004, p. 11) so wisely wrote in focusing on our moral duties.

  12. Assume that a Transparency Index is a means to improve the detection of errors (intentional or otherwise), confidence assessment, and editorial processes. I thought the original guidelines, which publishers should put on their online mastheads, were quite good. Possible additions (a sketch of how they might be encoded follows the list):

    – How editors / reviewers are selected
    – Whether submissions are reviewed anonymously
    – Number of reviewers assigned to a paper and how they are assigned
    – Selection method: a majority of positive reviews, or unanimity?
    – Recourse and remediation practices, if any, for rejections
    – Count of publications in the journal by its authors and editors (possible conflicts of interest and objectivity concerns)
    – Workflow and timeframe for publication (important for prospective authors)
    – Metric for publicly available references, datasets, figures, tables
    – Multistage review process which includes preliminary review, then secondary community review
    – Software in place for plagiarism detection
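
    A minimal sketch of how such fields might be encoded per journal for an index like this; every field name and example value is hypothetical:

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TransparencyRecord:
        """Hypothetical per-journal record combining the post's criteria
        with the additions suggested in the list above."""
        journal: str
        peer_reviewed: bool
        reviewers_per_paper: int
        median_review_days: int
        acceptance_rate: float            # 0-1
        requires_underlying_data: bool
        plagiarism_software: Optional[str]  # tool name, or None if unused
        screens_images: bool
        anonymous_review: bool
        follows_cope_or_icmje: bool

    # Example entry (all values invented for illustration):
    record = TransparencyRecord(
        journal="Journal of Hypothetical Results",
        peer_reviewed=True,
        reviewers_per_paper=2,
        median_review_days=45,
        acceptance_rate=0.30,
        requires_underlying_data=False,
        plagiarism_software=None,
        screens_images=True,
        anonymous_review=True,
        follows_cope_or_icmje=True,
    )
    ```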

    1. It seems to me that knowlengr gave excellent suggestions. I wonder whether the criteria by which reviewers’ work is assessed ex post (thus contributing to the decision to keep or change reviewers) could be included in the list.

  13. Good points, knowlengr. Speaking of editors and reviewers, I understand it must be hard to find good editors and reviewers available, but there should be limits. I was intrigued when I came across an editor-in-chief of three scientific journals (at the same time) who is just a grad student, has only half a dozen publications, and has never held a faculty position. This is unacceptable, in my humble opinion.

  14. Four retractions in Društvo defektologa Vojvodine, Novi Sad, Serbia, of works by the same author from the Republic of Macedonia

    On August 12, 2015 [http://www.defektolozi.org/index.php/vesti/234-retraction] and August 19, 2015 [http://www.defektolozi.org/index.php/vesti/236-ponistavanje-rada-retraction], Društvo defektologa Vojvodine, Novi Sad, Serbia, published four retractions of works by the same author from the Republic of Macedonia:
    Panova Gordana, Džidrova Violeta, Panova Blagica, Nikolovska Lenče (2015) Treatment and representation of autism in adolescent. In: Tematski zbornik radova međunarodnog značaja “Aktuelna defektološka praksa “, pp: 227-236. 2015, Novi Sad, Srbija. ISBN 978-86-913605-7-3 [http://www.defektolozi.org/images/dokumenti/zbornik%20radova%20zrenjanin%202015.pdf].
    Panova, Gordana (2014) Nееd of preschool education children with disabilities, as prerequisite for successful integrative education. In: Konferencija sa međunarodnim učešćem “Defektološki rad sa decom na ranom uzrastu”, pp: 41, 14 June 2014, Zbornik rezimea, Novi Sad, Srbija [http://www.defektolozi.org/images/dokumenti/Zbornik_jun2014.pdf].
    Panova, Gordana (2014) Work with children with development problems. In: Konferencija sa međunarodnim učešćem “Defektološki rad sa decom na ranom uzrastu”, pp: 25, 14 June 2014, Zbornik rezumea, Novi Sad, Srbija [http://www.defektolozi.org/images/dokumenti/Zbornik_jun2014.pdf].
    Panova, Gordana and Taseva, Elena and Panova, Blagica (2010) Problems and treatment of children with autism in Eastern Macedonia. In: Tematski zbornik radova Prve međunarodne konferencije “Specijalna edukacija i rehabilitacija – nauka i/ili praksa”. Pp: 96-107. ISBN 978-86-913605-1-1 [http://www.defektolozi.org/images/dokumenti/Zbornik_rezimea.pdf].
    Prof. Dr. Gordana Panova is professor at the Faculty for Medical Sciences, University Goce Delchev, Shtip, Republic of Macedonia (https://scholar.google.com/citations?user=D7CDi7EAAAAJ&hl=en).

    Prof. Dr. Mirko Spiroski
    Editor-in-Chief,
    Open Access Macedonian Journal of Medical Sciences,
    Skopje, Republic of Macedonia

  15. While there are some excellent ideas here, journals only really represent one stage in a process that at every stage should limit the publication of bad science. Many funding agencies have developed guidelines and have fixed requirements for the award of grants, and then state that they don’t have the resources to police them. They have committees assessing the applications, a stage heavily influenced by social biases, which should stop a lot of studies. There are few quality systems in place to monitor how grants are being used. Then there are the universities, which are tasked with the oversight of researchers while having an obvious financial incentive to ignore problems. Again, there are often clear policies in place, but these are operationalised internally and not subject to external examination. Once again, these should prevent bad research from ever reaching the journals. In fact, it is often the publicity departments of universities that are responsible for over-hyped pre-publication claims.
    So I do have some sympathy with the view that perhaps too much of the responsibility lies with journals, but that doesn’t mean that a transparency index wouldn’t be helpful. Having a similar set of quality indicators for the other stakeholders, particularly universities, would be even more useful. However, as there are few examples of any negative consequences for the people involved at any of these stages, and as total reliance on market forces to control misconduct is at best highly optimistic, I wonder if this is the first priority. Until there are quality systems in place that actually act on issues of responsibility and accountability, which amazingly appear almost non-existent despite the billions of dollars of public money involved, these indexes are really just playing with the problem.

  16. As regards the first point (“The journal’s review protocol…”), there is a nice project, SciRev (http://www.scirev.org), where such metrics are collected and nicely presented (in a dynamic database) from the authors’/submission side. Of course, the sample sizes are still very small… but if the community wants a free and neutral platform, the community should invest the time and fill in the submission-experience survey. I suggest that readers, and especially the authors of this blog, check out the project/site and possibly make contact with the people there. There is room for cooperation, I believe.
