We have seen plenty of projects unravel due to disputes over authorship, so we know this is a crucial issue in publishing. And the more authors are involved, the more issues can arise. So what happens when there are hundreds – or even thousands of authors on a single paper? Spencer Klein, a senior scientist at Lawrence Berkeley National Laboratory and a Research Physicist at the University of California, Berkeley, offers some suggestions for how mega-collaborations could think differently about authorship.
Over the past few years, Retraction Watch has hosted a number of interesting discussions about the meaning of authorship. Those discussions have, so far, missed one important issue: What should one do in mega-collaborations, with memberships the size of a large village? In my field (astro/nuclear/particle physics), papers with hundreds or even thousands of authors are common; recent papers by the ATLAS and CMS collaborations, the two large experiments at CERN’s Large Hadron Collider, list 2,870 and 2,270 authors, respectively. One 2015 joint paper appears to have broken an authorship record with more than 5,100 authors. (It’s also an increasing issue in other fields, such as genetics – one 2015 paper listed 1,000 authors.)
The usual techniques for assembling author lists fail here; a 2,500-person negotiation is a non-starter. Instead, authorship is determined by a set of criteria based on time in the collaboration and/or ‘service work’ – jobs like hardware upgrades, detector calibration, data-taking shifts, and the like, overseen by a hierarchy of institutional leads and, for the largest collaborations, national leads. People join the author list after meeting these criteria, and usually stay on until a certain amount of time (typically six months or one year) after they leave the collaboration. Authors are listed in alphabetical order; G. Aad is a prolific first author.
Everyone is an author on every paper, whether or not their specialized work contributed to that paper. Someone who worked on muon detection, for example, would be listed as an author even on papers that do not involve muons. There is no requirement that the individual authors have even read the paper, much less contributed to the writing.
Papers are actually written by a small group of authors or a committee, and internally reviewed by multiple committees. Individual authors typically have one opportunity to comment on the manuscript before submission to a journal. In some collaborations, individual comments are common; in others they are not encouraged. All of this is set out in a collaboration governance document – a kind of membership agreement among the participating institutions. This is very different from authorship in a small group. There are also questions from colleagues, particularly for hiring and tenure decisions: “What did s/he actually do for all these papers?”
As collaboration size grows, these problems are becoming more common in other scientific fields, and it may be time to think again about the meaning of authorship. Does it make sense to have 2,500 authors on a single paper? Not everyone is completely comfortable with current procedures, but there are strong pressures to maintain them – simplicity, academic pressure to publish, and the perception that this is the best way to give appropriate credit to people who built and calibrated the hardware and collected the data, but who may not have been involved in the actual analysis. Although my main purpose in writing this is to begin a discussion, I would like to offer a few modest suggestions:
- Large collaborations should encourage their members to be more fully engaged with the collaboration’s scientific output. The Fermi Large Area Telescope (LAT) Collaboration, for example, requires that all members affirm their authorship on each paper. When each paper is announced, members must visit that paper’s website and explicitly “opt in” to the author list. To Retraction Watch readers, this will sound like a minimal step — but, in mega-collaborations, it is questionable whether a majority of the people on the author list read most of the papers that carry their names.
- We should try to slow the growth in collaboration size. One consequence of the enormous time-scales for large experiments is pressure to be involved in multiple efforts – one project in construction, another taking data, and one or more in the final analysis stage. This is exacerbated by funding pressure – more projects means more money, even as individual roles grow smaller and smaller and the overhead of multiple meetings and frequent task switching reduces efficiency. Individual scientists, large collaborations and funding agencies should encourage people to work on fewer projects, but more intensively on each.
- In the long run, we need to find a better way of assigning credit. This was the topic of a workshop held at Harvard in 2012 (you can read the report here). Someone who spends a decade building a truly beautiful piece of scientific apparatus deserves recognition, including tenure, promotions, invited talks, scientific awards, etc. It is not clear that authorship on a science paper on a topic they may have little interest in, and/or may only poorly understand, is appropriate recognition. We need to find a better way to recognize the achievements of detector designers, calibration gurus, software experts and the like, rather than listing every single contributor as an author. Dividing the author lists to designate individual contributions — as was suggested here by K. Gunsalus and Drummond Rennie, and by NEJM editor Jeffrey Drazen (who proposed a separate author category for people responsible for producing data), and also in a Nature Comment, for example — would be a step in the right direction.
Klein’s opinions expressed here are his own, and not necessarily shared by Lawrence Berkeley National Laboratory or the University of California, Berkeley. He wishes to thank Justin Vandenbroucke (UW-Madison) for discussions on Fermi LAT author policies. Klein has varied physics interests, and maintains a blog, http://antarcticaneutrinos.blogspot.com/.