Retraction Watch

Tracking retractions as a window into the scientific process

“If you think it’s rude to ask to look at your co-authors’ data, you’re not doing science”: Guest post

Last month, the community was shaken when a major study on gay marriage in Science was retracted following questions on its funding, data, and methodology. The senior author, Donald Green, made it clear he was not privy to many details of the paper — which raised some questions for C. K. Gunsalus, director of the National Center for Professional and Research Ethics, and Drummond Rennie, a former deputy editor at JAMA. We are pleased to present their guest post, about how co-authors can carry out their responsibilities to each other and the community.

C. K. Gunsalus

Just about everyone understands that even careful and meticulous people can be taken in by a smart, committed liar. What’s harder to understand is when a professional is fooled by lies that would have been prevented or caught by adhering to community norms and honoring one’s role and responsibilities in the scientific ecosystem.

Take the recent, sad controversy surrounding the now-retracted gay marriage study. We were struck by comments in the press by the co-author, Donald P. Green, on why he had not seen the primary data in his collaboration with first author Michael LaCour, nor known anything substantive about its funding. Green is the more senior scholar of the pair, the one with the established name whose participation helped provide credibility to the endeavor.

The New York Times quoted Green on May 25 as saying: “It’s a very delicate situation when a senior scientist makes a move to look at a junior scientist’s data set.”

Really?

This comment is perplexing in many respects, not least of which is the plain language of the Submission Requirements and Conditions of Acceptance of Science, the journal in which the study was published. Upon acceptance, Science requires each author personally to attest to its terms. The terms include, prominently, the following statement:

The senior author from each group is required to have examined the raw data their group has produced.

The reason for it is obvious. The reader is happy to give credit for good work, but neither the peer reviewers, nor the editors, nor the readers were there as witnesses, so it is up to the authors to certify what took place. Co-authors are the only ones who can do this. This is fundamental to the compact with the reader that all co-authors accept. Yet we know that Green agreed to be the second (of two) authors of a manuscript based on original data he had never seen or asked to see, though he then signed an attestation that he had done so.

If the co-author is not standing for the integrity of the paper, who is? Whom shall the reader trust?

WHO DID WHAT?

The rules set forth by Science go further than requiring examination of data. They also include the following:

All authors of accepted manuscripts are required to affirm and explain their contribution to the manuscript.…

These words, like most regulation, rest on the rubble of past fights, which in this case consists of publications that have fallen over the decades. Soman/Felig was one of the earliest, and Duke/Nevins/Potti one of the latest. A necessary common factor in such debacles is a set of co-authors who put their names on work they know little or nothing about. We’re all clear on the benefits of authorship; some get a little hazier on the corollary responsibilities.

In 2001, Science (and Nature) were both badly burned by having to retract half a dozen papers (each) from a young researcher at Bell Laboratories, Hendrik Schoen. An investigating panel spent a great deal of time concluding that Schoen had committed scientific misconduct, but that his colleagues were guiltless.

In 1996, some clinical journals began introducing a system of disclosing authorial contributions to the reader. One of us (DR) wrote to Science in 2002 to suggest that if Science had had such a system in place, the whole Schoen investigation would have been unnecessary, because the co-authors who were found innocent would have declared that they contributed nothing.

Change finally came to Science only later, after its editors in 2005 had found themselves having to retract several studies by the fraudulent South Korean stem cell researcher Hwang; an internal investigation of the journal called for a declaration of contributions in the way other journals had, by then, required for some time.

Given this history and the requirement, how do we understand what, exactly, were Green’s contributions to this work? We cannot find a public version of the contributions statement in the journal Science, nor in the supplementary materials. Merely disclosing to the editor is insufficient. Of what value are these statements if the reader is not able to understand the role of each author?

AND WHAT ABOUT THE FUNDING?

In this particular case, the funding was another matter that the senior researcher was quoted as characterizing as too delicate to raise. Green acknowledged that he wasn’t aware that the funding referenced in the article didn’t exist:

Michael said he had hundreds of thousands of funding, and yes, in retrospect, I could have asked about that. But it’s a delicate matter to ask another scholar the exact method through which they’re paying for their work.

He has got to be kidding.

Both the authors are political scientists and must surely have realized how carefully their results would be scrutinized. One can easily imagine the existence of funding that could bias the results in this area.

Indeed, the instructions in Science about funding and conflict of interest provide:

Authors must agree to disclose all affiliations, funding sources, and financial or management relationships that could be perceived as potential sources of bias, as defined by Science’s conflict of interest policy.

Exactly how does the senior author imagine he can fulfill this requirement if asking his co-author about funding is too delicate a matter?

Unfortunately, it now appears that little of what was printed about the funding sources can be substantiated. Too bad that discussing—and accurately documenting—that wasn’t a part of the collaboration process.

WE ARE NOT THAT BUSY

A commonly raised objection to honoring — as opposed to just signing — co-authorship requirements is that everyone is so busy, and interdisciplinary collaborations are so complicated in these modern times that it is not reasonable to ask coauthors to examine data they will not understand anyway, or to offend others by asking about funding. This overlooks — or willfully ignores — some critical points.

Attesting to Science’s requirements of authors may seem a tedious chore. But the requirements only outline the simple ethical rules the profession of science has adopted governing how we must behave towards those to whom we are responsible: our colleagues, our profession, our readers, and the public. None of this shambles would have happened if those involved had understood the reason such openness and honesty is so crucial, and had followed these rules. (Yes, the reward system and incentives often are weighted to actions that undercut the fundamental mission and purpose of our entire enterprise. Yes, systemic fixes are necessary. None of that lets us off as individuals.)

Of course, everyone is busy. But the busier you are, the more critical these habits and best practices become. In fact, when thoughtfully adopted, these practices will protect you, much like the rules of the road. If you drive into oncoming traffic just because you’re in a rush, you put yourself and others in jeopardy.

The complexity of collaboration and cross-disciplinary expertise is not new. Researchers have been raising this issue for decades. That is precisely why the contributions system was introduced, to provide accountability for members of interdisciplinary teams to specify their areas of expertise and who did what.

At a larger level, our question is this: if the coauthors cannot understand the data, how can the reader? Isn’t it the duty of those on the collaboration to understand it well enough to stand behind it?

WITH POWER, COMES RESPONSIBILITY

We don’t wish to suggest that it is easy to fulfill the obligations of a co-author or mentor. In reality, given the complexities of interpersonal relationships and the trust science requires, it can be difficult and daunting. Still, it’s the job.

Nobel Laureate David Baltimore endured a decade of questions about his impulse to defend a collaborator—without ever reviewing any data—when presented with questions about work on which he was senior author. Yet even as late as 2002, he said:

In science, we assume that a colleague is trustworthy and only in extreme do we doubt it.

The reality is that a fundamental betrayal of the trust of collaborators is often at the root of a scientific fraud. The roster of saddened senior scientists who have been profoundly betrayed includes some truly great names: Efraim Racker, by a protégé he called “the son I never had”, and Francis Collins, who said of a fraud in his lab:

It was my darkest professional hour when I found out that a talented student who I had great hopes for systematically manipulated data. It changed me forever.

It’s a conundrum: in a community based on trust, how do you find a reasonable balance of trust and skepticism? Where are the lines?

The crux of the matter is that, as human beings, we are prone to self-deception and are poor judges of character and credibility. We all too often see what we want to see and hear what reinforces our existing beliefs. We are influenced (often without realizing it) by those around us. Remember the old saying, “if it sounds too good to be true, it probably is”? Someone who approaches you to join in publishing results that correspond to your own theories, and who provides ample funding to support the work, should trigger the recognition that this is precisely the time to apply a set of collaboration hygiene habits and good practices. Doing so will protect you, each and every participant in the collaboration, and the scientific process itself.

Of course, knowing what you should do, and knowing how to do it are sometimes two different things. If you are a senior researcher and wish to do the right thing, what should you do if you are approached because your participation (read: reputation) is a desirable addition? If you begin to have questions about data with which your name will be associated, how do you raise them without seeming to trample on your juniors?

Here are some possible approaches.

  • Start As You Mean to Go On

Set up the collaboration with clear understandings and protocols about contributions, roles, and access to data. This is to clarify who does what. It can be done matter-of-factly, without any heat or malice. If you set up the ground rules and expectations from the beginning, asking questions will not be “rude” or intrusive; it will simply be a necessary part of doing good science.

It gets increasingly difficult to ask questions once a whole set of assumptions has been made (and confirmed implicitly by actions). Think about advice you’ve likely received about teaching: start out strictly enforcing rules and relax as you go, because it’s almost impossible to tighten them later.

There are useful resources that can be consulted when developing a set of questions or personal policies for establishing collaborations. If you’ve come across something particularly useful—and practical—please do share it with us, as well as your experience implementing the ideas.

One way to do this is to establish, up front, a process to specify each person’s contributions – perhaps using guidelines in your field or from the journal to which you hope to submit. Then, make it clear that you’ll revisit this discussion at the end, after finishing a manuscript, to clarify who did what.

This conversation can be easier than you think. You can say something like:

So we don’t have any miscommunication, let’s go through what we plan to do, assign each of our efforts to one of the categories from the International Committee of Medical Journal Editors, and discuss. We’ll circle back at the end and revise to reflect what we planned to do and what actually happened.

or

It’s my policy to ask all those with whom I collaborate to….[fill in the blank]

You may be pleasantly surprised how easy this can make an otherwise awkward process.

  • Institute and Implement Good Practices—Build in Your Own Checks and Balances from “Go”

Build practices, appropriate to the discipline, for open sharing of data, and a system to raise and address questions. Examples:

- Timelines and expectations

- Where data are kept and in what form, and who takes the lead in responding to queries about the data after publication

- Lab practices for replication/repetition of work before manuscripts are submitted

- Who has access to raw data and/or notebooks, and under what conditions

- What documentation standards are expected for code, where programs are kept, and who has access

- What requirements are imposed on the research by funder or institutional policies in terms of the research plan, approvals, intellectual property, etc.

One extremely valuable practice comes from the management literature, in Peter Drucker’s classic article “Managing Oneself.” Drucker suggests having what he calls a “contributions conversation” that involves saying, as you begin each new working relationship:

This is what I am good at. This is how I work. These are my values. This is the contribution I plan to concentrate on and the results I should be expected to deliver.

And then continue by asking:

And what do I need to know about your strengths, how you perform, your values, and your proposed contribution?

To this, one of us (CKG) teaches the value of adding one more component:

This is how I would like you to approach me if you have a problem with something I’ve done. How would you like me to approach you if I have a concern with your work?

Some people like to talk things through; others prefer to read and mull before talking. Knowing the most constructive way to raise an issue is helpful in itself, and it signals that of course we’ll have difficulties, and that we’ll approach them like the grown-up professionals we are. (Or aspire to be.)

Human beings have misunderstandings, frustrations, and friction. If you acknowledge that from the first, and develop between you a clear understanding about how you each will handle such a situation, that will help reduce awkwardness as well.

  • Develop Personal Scripts

No one ever expects to be named in the New York Times as the co-author of a manipulated study. Not Donald Green and not Francis Collins. This makes it imperative to have on hand some words (your personal scripts) for interactions from which you might otherwise cringe.

Here are some possible examples:

Let’s look at our project as a reviewer might. For each conclusion in our paper, where is the support in our data?

If we are questioned, where are the primary data kept, and who knows how to interpret them, or our code/protocols/methods? How did the data become conclusions?

Our protocol of sharing processed data is efficient and saves time. To fulfill my obligations to you and the partnership, there may be times when I need to look at raw data to think this through. Where can I find them and the accompanying documentation, even at off-hours?

It’s our practice to keep a complete file of foundational documents. Please send the IRB/IACUC/funding/whatever documents for our documentation file as we begin our process.

The journal is requiring me to certify that I’ve seen the raw data. What process will we implement so that we can sign our affirmations honestly?

And what if you have concerns about data? The first, most important, most effective rule is to start by asking questions, in a non-confrontational way. It’s all too easy to be confused or to be misunderstood. Only escalate if the answers are not satisfactory or seem contradictory.

These data confuse me, as they seem not to include as many data points as we had discussed. Can you help me understand where I’m getting confused?

I thought we’d agreed all the data would be deposited by now. Is there a technical hangup or something else keeping this from happening?

  • Designate Contributions for All Projects

Make it one of your personal professional policies to always clearly designate contributions of all participants for each and every manuscript.

Set out your criteria for authorship at the beginning, with examples, and then revisit at the end when the work is done. Set aside time to go through the criteria from the International Committee of Medical Journal Editors (ICMJE) or the journal to which you plan to submit, and assign the contribution of each participant into one or more categories. Again, if you have indicated (in writing) at the beginning of the collaboration that this is your policy or practice (matter-of-factly, just the price of admission), it will go more smoothly.

HOW COULD ALL THIS HAVE HELPED THE GAY MARRIAGE STUDY?

As a thought experiment, consider this: Imagine that Green had said his routine policy for collaborating was that he be provided with: a) a copy of the approved IRB protocol; b) a full research design including the survey instrument, procedures and analysis protocols, including information about the survey companies doing the survey; c) a copy of the funding agreement so he could ensure he was fulfilling his obligations as a collaborator; d) periodic check-ins to discuss the progress of the project; and e) access to the raw data before manuscript submission. How do you think the events that have absorbed so much attention might have played out differently?

We do not know whether Green did any of these things. If he did, we are perplexed about how the paper got as far as it did. But such policies provide more opportunities to discover the flaws in any study earlier. At best, they might deter problematic conduct altogether.

Bottom line: if you think it’s rude to ask to look at your co-authors’ data, you’re not doing science.

What is your good name worth to you?

Drummond Rennie is an Adjunct Professor of Medicine, the Phillip R. Lee Institute for Health Policy Studies, University of California San Francisco and former deputy editor, west, JAMA. C. K. Gunsalus is director of the National Center for Professional and Research Ethics (http://www.nationalethicscenter.org), research professor, Coordinated Science Laboratory, professor emerita, College of Business at the University of Illinois at Urbana-Champaign. She also runs a consulting practice. 

Contributions: Gunsalus wrote an initial draft. Rennie discussed and rewrote every part of each successive draft and idea collaboratively with Gunsalus, and provided additional insights and references. Each approves and takes full responsibility for this final version. The authors received no funding for this article.

Written by Alison McCook

June 18th, 2015 at 11:30 am

Comments
  • Dr. Acula June 18, 2015 at 11:50 am

    I think more universities need to start taking the same steps as the alma mater of the Bell Laboratories researcher Hendrik Schoen, cited above, which rescinded his PhD due to his faking data. It’s doubtful it will happen though, since most universities seem to be more worried about having a tarnished (in their view) image than promoting quality education and research.

  • Query + complaint June 18, 2015 at 12:10 pm

    This whole mess could be easily clarified in two steps:
    a) All original data should be deposited in an institutional repository. The research institute should have the maximum responsibility of ensuring that primary and raw data is deposited at least once a year, with checks. I think this is the first step of failure.
    b) Any manuscript submitted to any journal should be accompanied with the files of the raw data. Point.

    The fact that both institutes and journals/publishers have failed in this key check, and continue to fail, is one reason why those who manipulate data, continue to do so: because there are no checks (in most cases). Only when red flags are raised is raw data requested. The system continues to operate on blind faith.

    So, when will the mainstream publishers change this course of action and introduce compulsory raw data submission? (Rhetorical question: IMO, this will likely never happen, because they might find that many or most of the papers submitted to their journals have some data problems.)

    • Simon George June 19, 2015 at 3:56 am

      While there is a case for institutional archiving of data (together with many logistical problems to do with indexing, confidentiality, recording the data format and so forth), there are problems with submitting the raw data to the Journal. In any case, what is the Journal going to do with it? Raw data sets can reach gigabyte sizes. They are often incomprehensible except to an expert, and may well require specialized software to reduce it to an understandable size and form. Perhaps it is sufficient to have the Institution confirm that the raw data actually exists?

      Also, if the data involves human subjects, it may well require robust anonymization, which can be non-trivial.

      • Brian June 19, 2015 at 7:12 am

        The raw data for my last paper was just shy of four terabytes. I assume I’ll print it out and send them a convoy of eighteen-wheelers.

  • Masked Avenger June 18, 2015 at 1:00 pm

    just my opinion: senior co-authors take the work and trust and honesty for granted because it’s expedient to do so. Yes, it’s probably a dereliction of duty to simply trust, but trust makes things faster and easier. And, come on, everyone likes a big-deal hot-shit story, and nobody wants to think that amazing stories aren’t true. More so when your name is on a hot-shit paper and you didn’t need to do any work for it.

    Bottom line: Complacency = complicity.

    All involved WANT to believe.

    When granting agencies, journals, and departmental committees ask for the impossible, people will crawl out of the woodwork to give them the impossible, and nobody wants to be troubled with the limitations of working in a reality-based paradigm.

  • Mathew June 18, 2015 at 4:03 pm

    There should be surprise inspection visits from the NIH or other regulatory authorities to check raw data, similar to FDA inspections of biopharmaceutical companies. If the NIH finds major discrepancies during an inspection, it should rescind that lab’s funding. If the NIH implemented this policy, professors as well as universities would take major steps to check the accuracy of data and prevent scientific fraud: they would implement QC procedures to check data accuracy and avoid adverse findings in NIH inspections.

  • Miguel Roig June 18, 2015 at 6:25 pm

    I am a psychologist by training and most of my collaborations have been with students who have worked under my direct supervision. Although I have never had a problem with fraudulent data (that I am aware of!), I once did have a very minor incident with a student regarding authorship assignment. As a result, I created a faculty-student research contract, http://teachpsych.org/resources/Documents/otrp/resources/mr07research.pdf, in part to prevent potential authorship and data ownership problems with my students and also as a means of teaching basic elements of responsible research conduct (please pardon any appearance of self-promotion).

    I have also collaborated with colleagues and thought of altering the above document for purposes of formalizing these professional collaborations. However, although I always make sure that we all discuss key issues of our work, such as authorship and data custody, I never developed a professional version of the document because, frankly, I thought it would convey an element of distrust and be potentially disruptive to our professional relationships. It is in this context that my initial reaction to the authors’ conclusion that “if you think it’s rude to ask to look at your co-authors’ data, you’re not doing science” was that it was plainly objectionable, and I was ready to formulate a reply as to why I thought so. But, as I was reflecting on how I might respond, one of Bob Dylan’s songs popped into my head and my attention veered to a segment of its lyrics:

    “The line it is drawn
    The curse it is cast
    The slow one now
    Will later be fast
    As the present now
    Will later be past
    Your old road is
    Rapidly fadin’.
    And the first one now
    Will later be last
    For the times they are a-changin’.”

    Yes, maybe the times are, in fact, a-changin’ and we need more of a ‘trust but verify’ approach to scientific collaborations.

  • oldnuke June 18, 2015 at 6:33 pm

    If you’re going to archive the data, how about archiving the SAS/R/SPSS/what_have_you code that was used? I can tell you from great personal experience that data without extensive documentation of the format is useless.

    I can’t tell you how many 7-track magnetic tapes we scratched during the Y2K testing at work – we had NOTHING to read 7-track tapes with and NO IDEA of the format of the data archived. Yet they were all coded for indefinite retention! PC users — how many diskette drives do you see today? See what I mean?

    The Digital Curmudgeon, Recipient of the Been There Done That Prize. 🙂

  • ckc (not kc) June 18, 2015 at 11:24 pm

    …interdisciplinary collaborations are so complicated in these modern times that it is not reasonable to ask coauthors to examine data they will not understand anyway,…

    If you cannot understand the data, you should not be a co-author.

    • Simon George June 19, 2015 at 3:34 am

      “If you cannot understand the data, you should not be a co-author.”

      Not so. There is a big difference between understanding the RESULTS and understanding the DATA. It is reasonable to ask all authors to understand the results, but not the raw data.

      Consider the case of a simple protein structure study. These are often the result of an interdisciplinary collaboration between an X-ray crystallography lab and a microbiology/genetics lab. While it is reasonable to expect that the microbiologists can understand the protein structure and concomitant conclusions (the results), it is not reasonable to expect them to be able to interpret X-ray diffraction patterns (the raw data). Similarly, it is not reasonable to expect the crystallographer to possess microbiological and genetic manipulation expertise to the point where they can detect deceit.

      • genetics June 23, 2015 at 6:16 am

        Thanks Simon, as a geneticist, I was about to post the same when I found your reply.
        The geneticist is doing genetic studies and detects a mutation; the X-ray crystallographer adds data on structural changes of the protein. The geneticist studied biology and the crystallographer studied physics or physical chemistry. Both sides are going to have a very hard time going into the depth of the raw data of the cooperating partner. That’s basically the reason why you let outside labs add some interesting work: because you yourself are not able to do it.

        If you want to have interdisciplinary research and interdisciplinary papers, one will have to accept that at least some raw data is not fully understood by all co-authors down to the same level.

  • Leonid Schneider June 19, 2015 at 4:54 am

    Very valuable contribution, but it operates on the premise that scientists do research in order to advance knowledge. I believe that most are actually in the business of publishing papers and obtaining funding, which is why such issues as reproducibility, data reliability, and conflicts of interest are really just seen as obstacles. The focus in academic research is on advancing scientists, not science. It is like investment banking, which is also not really there to increase the communal wealth, but to benefit its individual investors.

    • Guido Berens June 19, 2015 at 10:41 am

      I don’t agree. Unlike for investment bankers, I don’t think money or fame are a primary motive for anyone to become a scientist. Publications and funding are merely a means to build and maintain a career as a scientist. Why would anyone care about obtaining research funding as an end in itself, or about having one’s name in a journal as an end in itself (a journal that is only read by colleagues working in the same field)? Granted, increasing pressure in the form of decreased government funding etc. may result in putting undue emphasis on these means. But I don’t think scientists as a profession have completely lost sight of the ends that they strive to achieve.

      • Leonid Schneider June 23, 2015 at 1:09 pm

        Guido, humans generally strive for dominance inside a group (not everyone can make it to dictator of a state). So it is not about money, but about power, status, fame, etc., all inside your own self-chosen community of researchers. You cannot impress your scientist peers with a big house or a fancy car, but you can with your huge lab and huge papers.

  • Brian June 19, 2015 at 7:06 am

    Bottom line: if you think it’s rude to ask to look at your co-authors’ data, you’re not doing science.

    Like, we’re not stupid. “Hey Sarah, can I have the simulation outputs to check for Toomre’s instability and see if I can do anything with it?” is not a rude question. “Hey Sarah, I think you’ve been faking the data, so can I have it to check for that?” is a rude question.

    But no, I don’t like the idea of having to prep terabytes of data (my papers have 1-10 TB of raw data) into a format everyone can read, re-write all my analysis software so anyone can compile, run, and understand it, and deal with some huge university bureaucracy to check I’ve done it. The bureaucracy to check that our papers are open-access is a huge time sink, and every paper in my field has been open access for ~10 years.

    • JATdS June 19, 2015 at 2:47 pm

      Brian, I’m glad that you brought up that point. I have sounded the alarm and raised a red flag on open access files*, usually available as freely available supplementary files. The risks have not yet been assessed, and I get a gut feeling that this may be the next source of scientific fraud (if it’s not already…).

      * Teixeira da Silva, J. A. and Dobránszki, J. (2015), Do open access data files represent an academic risk?. Journal of the Association for Information Science and Technology. doi: 10.1002/asi.23557
      http://onlinelibrary.wiley.com/doi/10.1002/asi.23557/references

  • Miguel Roig June 19, 2015 at 10:45 am

    Leonid Schneider
    I believe that most are actually in the business of publishing papers and obtaining funding, which is why such issues as reproducibility, data reliability, and conflicts of interest are really just seen as obstacles. The focus in academic research is on advancing scientists, not science.

    Leonid, my sense is that most of us will engage in some degree of ‘personal advancement’ from time to time, but I sure hope that genuine scientific curiosity and the search for truth is what continues to drive scientific research. On the other hand, I bet personal advancement is precisely what is behind many instances of questionable research practices and of outright misconduct.

  • Alan R. Price June 19, 2015 at 1:58 pm

    Ah, so well said, Tina and Drummond, thou good and faithful servants of ethics in science, and dear old friends and colleagues, who never fail to tell it like it is, with no nonsense. Great.

  • Oliver C. Schultheiss June 21, 2015 at 8:47 am

    Complete transparency between co-authors can also be used to everyone’s advantage in the following way: ask your coauthors to take the data set and redo all the analyses reported in the paper from scratch. If they get the exact same findings that are reported in the paper, then the analytical approach is evidently sound and sufficiently well-documented in the paper. If their findings diverge, then either the analyses are insufficiently documented or, worse, their or your own analytical approach was flawed. The latter may happen more frequently than we would like (and rarely intentionally), simply due to the complexity of some data processing and analysis steps. All the more reason to verify the soundness of the statistical analyses by asking some or all coauthors to redo the analyses on their own and check whether everything pans out before the paper is even submitted.
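    [For readers who like the idea in concrete terms: the cross-check described above can be sketched in a few lines of Python. Everything here is hypothetical — the data, the statistics, and the function names are stand-ins, not anything from the retracted study — but the shape of the exercise is the same: a co-author recomputes the reported numbers from the shared raw data and flags any divergence.]

    ```python
    # Illustrative sketch of a co-author cross-check: independently recompute
    # the key statistics from the raw data and compare them to the values
    # reported in the manuscript draft. All numbers below are made up.
    import math

    def reanalyze(raw_data):
        """Independent re-analysis: recompute mean and sample SD from scratch."""
        n = len(raw_data)
        mean = sum(raw_data) / n
        var = sum((x - mean) ** 2 for x in raw_data) / (n - 1)
        return {"mean": mean, "sd": math.sqrt(var)}

    def cross_check(reported, recomputed, tol=1e-3):
        """Return, per statistic, whether the independent rerun matches."""
        return {k: abs(reported[k] - recomputed[k]) <= tol for k in reported}

    raw = [2.1, 2.4, 1.9, 2.2, 2.6, 2.0]        # stand-in for the shared data set
    reported = {"mean": 2.2, "sd": 0.2608}      # stand-in for the draft's numbers

    checks = cross_check(reported, reanalyze(raw))
    if not all(checks.values()):
        mismatched = [k for k, ok in checks.items() if not ok]
        print("Divergent statistics, investigate:", mismatched)
    ```

    Any statistic that fails the check then prompts exactly the conversation Schultheiss recommends having before submission, not after.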

  • Dave Fernig June 23, 2015 at 11:42 am

    One can grasp the data, but one may not have a real understanding of the measurement, its theoretical basis, and its limitations. For example, in an experimental high energy physics paper, no single author is capable of really understanding all facets of the work, from the ring, to the detectors, and so on. However, I do agree that the senior author should have taken the time to gain a grasp of the measurements. This does take time and, of course, is singularly “unproductive” :).
    I do not think the issue is particularly problematic in much of life sciences, but it is at the interface of life sciences and physical sciences.

  • Stefanie June 28, 2015 at 9:21 am

    The gist of the comment seems to be that everyone belonging to the author team and especially the senior researcher MUST be responsible for errors or fraud committed by more junior personnel. Why? My PhD students are 25+ years old — is that too young to be responsible for proper scientific conduct? For how long would you like me to oversee someone’s work? — Up to the PhD or also during their post-doc time, or until the end of my professional life? My own approach is to review all data analyses for the first and maybe second paper, and then to stop doing that when I am confident that a student is perfectly capable of doing the work error-free. As a supervisor it’s my duty to teach students about proper scientific conduct (and I do), but checking every bit of their work to make sure they adhere to it is a bit over the top – after all, these ‘juniors’ are pretty grown up.

    • Lee Rudolph June 28, 2015 at 1:36 pm

      For how long would you like me to oversee someone’s work? — Up to the PhD or also during their post-doc time, or until the end of my professional life?

      Without going so far as to include spoilers, I can say that precisely those questions are the basis of Isaac Asimov’s first mystery novel, originally published as The Death Dealers and reprinted with his original (and preferred) title A Whiff of Death. The matrix in which the mystery story is embedded is a sort of coming-of-age, write-what-you-know roman à clef about a man coming up for long-delayed tenure in a post-WWII chemistry department at a barely-disguised Boston University. I only recently read it for the first time; it’s a much better novel than it has any right to be, and though Asimov’s reflections on your questions aren’t particularly deep, they’re serious, and not something I’ve seen in other fiction (not that I read a lot of the kind of fiction where they might show up; for instance, I haven’t read The Mind-Body Problem, another roman à clef, in this case about the Princeton philosophy department, but I wouldn’t be surprised if they show up there).
