For all our talk about finding new ways to measure scientists’ achievements, we’ve yet to veer away from a focus on publishing high-impact papers. This sets up a system of perverse incentives, fueling ongoing problems with reproducibility and misconduct. Is there another way? Sarah Greene, founder of Rapid Science, believes there is – and we’re pleased to present her guest post describing a new way to rate researchers’ collaborative projects.
In science, we still live – and die – by the published paper. Discontent with the Impact Factor, the h-index, and other measures of article citation in scholarly journals has led to altmetrics’ quantification of an article’s impact in social media outlets – e.g., Twitter, blogs, newspapers, magazines, and bookmarking systems. But discussions of alternative reward systems rarely stray from this genuflection before the scientific paper. Consequently, publishing big, “positive” findings in high-impact journals (and now in social media) remains the researcher’s Holy Grail.
One unfortunate corollary of this doctrine is undue pressure to bury negative results and to cherry-pick data that prove one’s hypothesis. Another is the fear that unpublished findings will be scooped by competitors, neutering collaborative projects where multidisciplinary teams and access to large datasets are required.
Compounding the problem, when true collaborations do occur, only the first and last authors of these multi-author papers are considered paramount to the findings. Data curation, software development, statistical analyses, unique methodologies, and other key contributions often go unrewarded under “middle author” attributions.
Amy Brand, Liz Allen and others (sorry, middle authors!) have taken the lead on crucial first steps to address this attribution problem. Their paper “Beyond authorship: attribution, contribution, collaboration, and credit,” published last year in Learned Publishing, describes Project CRediT (Contributor Roles Taxonomy), which defines contributor roles in published research output in the sciences. The purpose of this taxonomy is to “provide transparency in contributions to scholarly published work, to enable improved systems of attribution, credit, and accountability.”
This philosophy provides a framework for Rapid Science, a Brooklyn-based nonprofit I founded, to move forward with a reward metric that scores researchers’ effectiveness, initially on funded, collaborative projects. Our Collaboration Score – or “C-Score” – will provide a meaningful measure of each participant’s contributions to projects that require robust group involvement.
As an initial step, supported by grants from the National Institutes of Health (NIH) Common Fund, The Andrew W. Mellon Foundation, and pharmaceutical companies, we’re developing social media tools that let scientists post and discuss the results of collaborative research. Information will be transparent, archived, and searchable on the ‘Rapid Learning’ platform, and incremental findings can be published as posters or case reports in a peer-reviewed, open-access extension of the platform. Even before publication, scientists who interact and discuss projects on the site might be considered contributors to “organic peer review,” helping to strengthen and provide additional context for the research as it develops.
Activity in these pilot communities will permit the development of the C-Score, based on individuals’ levels of sharing, commenting, analyzing, replicating, peer reviewing, authoring, and other activities that lead to project solutions and their communication via open-access publication. These activities currently occupy much of a researcher’s time in team efforts, yet they go largely unrewarded in multi-author attributions.
Development of the metric and rankings will be informed by the study of successful communities that rely on reputation systems to promote quality and collaboration – e.g., eBay, Wikipedia, Rap Genius, Reddit, and Stack Overflow. These sites employ algorithms that incorporate peer insight, social interaction, and more complex computational factors, such as the longevity of text contributions. But existing online communities do not face the same high stakes as biomedical research, so creating the C-Score will require new methodologies devised (collaboratively!) by specialists from multiple disciplines.
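To make the idea concrete, here is a minimal sketch (in Python) of how such a score might aggregate weighted activity counts with an illustrative “longevity” factor for contributions that persist in a project’s record. The activity categories, weights, and decay function are hypothetical placeholders for illustration only – Rapid Science has not published a formula, and any real C-Score would be devised collaboratively, as described above.

```python
from dataclasses import dataclass
from datetime import date
import math

# Hypothetical activity weights -- placeholders, not Rapid Science's actual values.
ACTIVITY_WEIGHTS = {
    "share_data": 3.0,
    "comment": 1.0,
    "analysis": 4.0,
    "replication": 5.0,
    "peer_review": 3.0,
    "authorship": 4.0,
}

@dataclass
class Contribution:
    activity: str      # one of the ACTIVITY_WEIGHTS keys
    made_on: date      # when the contribution was posted
    still_live: bool   # whether it survives in the current project record

def longevity_factor(contrib: Contribution, today: date) -> float:
    """Illustrative longevity bonus: contributions that remain part of the
    record gain weight slowly over time (log-scaled), up to a cap."""
    if not contrib.still_live:
        return 0.5  # superseded contributions still count, but less
    days = max((today - contrib.made_on).days, 1)
    return min(1.0 + math.log10(days) / 3.0, 2.0)

def c_score(contributions: list[Contribution], today: date) -> float:
    """Toy C-Score: sum of activity weights scaled by longevity."""
    return sum(
        ACTIVITY_WEIGHTS.get(c.activity, 0.0) * longevity_factor(c, today)
        for c in contributions
    )

# Example: a researcher who shared a dataset, reviewed a colleague's analysis,
# and posted a comment that was later superseded.
history = [
    Contribution("share_data", date(2015, 6, 1), still_live=True),
    Contribution("peer_review", date(2016, 1, 15), still_live=True),
    Contribution("comment", date(2016, 2, 3), still_live=False),
]
print(round(c_score(history, date(2016, 3, 1)), 2))
```

A simple weighted sum like this is only a starting point; a usable metric would also need normalization across projects of different sizes and safeguards against gaming – precisely the questions the pilot communities are intended to inform.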
Given the propensity of any metric system for gaming, bias, and inaccuracy, measures of overall project success will also be formulated to mitigate these risks. In addition, the ability to drill down to view specific contributions, building on CRediT’s taxonomy, will be invaluable to funders and administrators in determining scientists’ effectiveness at solving problems and playing well with others.
Sarah Greene is based in Brooklyn, New York. She previously worked with Retraction Watch co-founder Ivan Oransky at Praxis Press, and with Retraction Watch editor Alison McCook at The Scientist.
A kind suggestion to Brand et al. – some ideas of mine that may be worth considering when working on this new project:
Teixeira da Silva, J.A. (2013) The Global Science Factor v. 1.1: a new system for measuring and quantifying quality in science. The Asian and Australasian Journal of Plant Science and Biotechnology 7(Special Issue 1): 92-101.
http://www.globalsciencebooks.info/Online/GSBOnline/images/2013/AAJPSB_7(SI1)/AAJPSB_7(SI1)92-101o.pdf
Jaime, I will forward this to Amy Brand et al., and appreciate knowing about your extensive work on the subject. Impressive, and certainly we’ll look at this carefully as we strategize the C-Score.
There’s clearly a great deal of work to be done, but it looks very promising to me.