The week at Retraction Watch featured a marriage proposal tucked into a paper’s acknowledgements section, the retraction of a controversial Science advice column, and The New York Times pushing for more focus and funding on research misconduct. Here’s what was happening elsewhere:
- “My professor demands to be listed as an author on many of my papers,” an anonymous academic writes in The Guardian.
- “Researchers should not overvalue or fetishize scholarly metrics,” writes Jeffrey Beall in The Journal of Physical Chemistry Letters.
- An economist who chooses what gets presented when at a major meeting admits that his decisions are based on what will be picked up by the press.
- “There is evidence on peer review, but few scientists and scientific editors seem to know of it,” writes Richard Smith, “and what it shows is that the process has little if any benefit and lots of flaws.” (Smith is a member of the board of directors of The Center For Scientific Integrity, our parent organization.)
- Don’t count on your publications to take care of you when you get older, says the editor of Perspectives on Psychological Science.
- “Want to be ethical in science?” asks Lenny Teytelman. “Speak up.”
- The Reproducibility Initiative: Cancer Biology has had trouble accessing original study data, William Gunn said at the World Conference on Research Integrity in Rio. (Disclosure: Ivan chaired the session described in the report.)
- “Who’s to blame when fake science gets published?” asks Charles Seife.
- “Tricked: The Ethical Slipperiness of Hoaxes.” Hilda Bastian weighs in on the chocolate sting study. (Bild has retracted their coverage.)
- Joel West is puzzled by a new editorial written by Ulrich Lichtenthaler, who has retracted 16 papers.
- Brian Nosek tweeted that “The biggest impediment to research progress is not fraud, it is all scientists reading about fraud.” Andrew Gelman unpacks that comment.
- An argument over possible drug-resistant malaria in Angola — featuring a call for retraction from the WHO — seems to have been concluded with an exchange of letters, reports Robert Fortner.
- “Plagiarism is the most common cause of retractions in BioMed Central journals,” Kerry Grens reports from the WCRI.
- Amgen has subpoenaed a journalist from The Cancer Letter “whose stories eight years ago revealed negative clinical trial results for a best-selling product,” Ed Silverman reports at Pharmalot.
- Here’s how a newspaper became a science journal.
- Mark your calendars: The next Peer Review Congress will be in Chicago from September 10-12, 2017.
- The NIH has stopped operations at its drug development unit after finding “serious manufacturing problems” and contamination of drug lots.
- “The public trusts scientists much more than scientists think,” writes Nature. “But should it?”
- “Has science ‘taken a turn towards darkness?’” asks Steven Corneliussen.
- The Stanford Daily has retracted a story because it was based on calendar year data, not academic year data.
- Betty Jones & Sisters: Jeffrey Beall reports on a new publisher with a strange name.
- Want to grow usage of your scientific paper? Tips from Jenny Peng, an editor at Wiley.
- “[Y]our scientific arguments should be able to speak for themselves, without needing to attack where, how, or who expressed the idea,” writes Brett Buttliere.
- Want to increase data sharing? Focus on reputation, rather than obligation, argue four researchers.
- “Science is the most democratic of human endeavours because, in principle, anyone can replicate a scientific discovery,” writes the editor of Evolutionary Ecology in an editorial about writing scientific papers (paywalled).
- Today’s university: Academic institution, or multinational corporation?
- Peter Edmonds discusses the correlation between retractions and impact factor.
- A New Jersey newspaper’s response to a judge’s order to “remove a news article” is “awesome,” says Xeni Jardin.
The “anonymous academic” who “writes in The Guardian” is incorrect, in my opinion. I can’t think of many situations in science where a graduate student or post-doc SHOULD be an author and not include his major professor. The professor has no doubt provided grant support for the student, an environment and other resources to conduct the work, and, except in extremely rare cases, the framework, experimental plan, and guidance for the publication.
Using that logic, all publications based on dissertations should include the advisor and the members of the committee as co-authors. Those who guide and provide support are acknowledged.
Demanding to be a co-author implies that the person wants paper credit without providing any direct contribution to the research.
It is academic bullying. It is something that may get resolved by having the student talk to a trusted senior faculty member.
Any serious journal’s ethical guidelines ask that all coauthors have significantly contributed to the paper they sign, and fully endorse its contents. One may discuss what “significantly contributed” exactly means. I would say it should at least imply having provided important ideas for the research problem, and having spent significant time discussing the results. The article in The Guardian addresses cases where the professor/PI put their name on papers they had not even read. This strikes me as seriously at odds with the ethical guidelines.
I’m not sure whether the author is a PhD student or a postdoc. It sounds like he/she runs his/her own research (with the help of MSc students) but is dependent on a professor for it.
I guess I would have to ask whether, without authorship credit, there is anything in it for the professor. I am sure many PhDs would love to have their own independent research program and let someone else pay the bills, but this isn’t reality.
My own view is that if the postdoc has his/her own fellowship, came up with all the ideas, and executed them without significant help, then they might ask to be the corresponding author on publications. However, if the postdoc has no means by which to do the work without the professor and his/her MS students, that postdoc is not independent of the professor. Unfortunately, it sounds more like this person just had a bad postdoctoral advisor who isn’t engaged in the work.
I think there may be a misunderstanding here. When someone from the UK says ‘my professor’ it often means that they are talking about the head of the department. ‘Professor’ is a senior title, not generally used for entry level faculty, as it is in the US. What we in the US might call ‘my PI’ or ‘my thesis advisor’ would be referred to as ‘my supervisor’ in the UK. I suspect that the Guardian article is written by a lower level PI (a ‘Demonstrator’ or ‘Lecturer’ in the UK), not by a grad student or postdoc. It is possible that the author of that piece is justified in his grouse.
I would agree, it really all depends on the individual situation and on the definition of “my professor” “principal investigator” etc.
In Germany, some extreme cases can be seen in clinical settings. There are plenty of clinical departments at university hospitals that have large “basic science” research groups. The head of the whole department is a clinician, a Professor of Internal Medicine or whatever. So technically, this clinician qualifies as “my professor” for a lot of basic science researchers. But s/he clearly does not qualify as principal investigator, in my opinion. For me, the PI is the one who had the main idea for the research, who supervises it, and who puts the puzzle pieces together when several people do the actual bench work.
Nevertheless, these clinical department leaders quite often insist on being co-authors on every basic research paper. If you find clinicians with a new co-authorship every week, that’s how it works. It has become less common these days, but in the old days they sometimes even insisted on being the corresponding author of such papers. That left the actual PI with first authorship and the person doing the actual work in the middle. I do hope that no one here finds these practices justifiable.
In a well-led basic research department, the head of the department (“the professor”) also has her/his own research group, where s/he acts as PI. Other groups with independent PIs exist, and the head of the department should only be on papers from these other groups if s/he actually contributed more than the basic infrastructure of the department and a little small talk. Even department funding is no reason for co-authorship of the head of department, IMHO. Funding is an argument for co-authorship only if the grant application was written by the head of department, which would mean that the head of department developed the basic ideas.
I didn’t say just giving guidance (which a committee member is tasked to do) is enough to justify authorship.
Members of the dissertation committee usually do not provide grant support for the student, or an environment and other resources to conduct the work. Nor do they provide a framework or experimental plan. The guidance provided by a committee member meeting with the student a few times a year rarely rises to that of an advisor. No, committee members rarely deserve authorship, and if they do, it’s because of some direct contribution, usually one of the items above.
On the other hand it is rare that all a major professor does is provide some nebulous guidance. It takes great effort to establish, direct, maintain and fund a viable research program.
I cannot speak for the student, but I myself have been part of a number of “viable” research projects. In all of them, the members sit and discuss papers and author arrangements before the work is done. The PI can choose to be on a paper or not. My last publication with that group did not include the PI as an author (his choice; it was outside of his field and he felt that he could not make a contribution).
In the US the Federal Govt does not give dissertation grants directly to the student. In my case, my grant was under my advisor’s name. He promised not to buy a sailboat with my grant.
Your speculation regarding the life of the student may be correct but there is nothing in the Guardian article to support your speculation. The article suggests that the senior researcher demanded to be added to multiple papers. How can a good senior researcher supervise a student and not be aware of the presence of multiple papers?
My suggestion to seek out help from a trusted senior faculty member seems like a sensible first step.
Nils, re: “The Guardian addresses cases where the professor/PI put their name on papers they had not even read.”
How does this even happen?
Student does a bunch of studies (using resources in the lab), writes a paper, submits without the professor or PI name on it, who then – on learning about the paper – insists that her name go on it?
That student would be out of my lab if something like that transpired.
rfg: We’ve seen plenty of instances on RW of articles being retracted where the PI coauthor blames it all on the evil postdoc who manipulated the data without his knowledge. I’m not saying the student submits the paper without the professor’s knowledge. I’m saying the professor, because his group is so large, only gave the paper a very cursory look (not to mention the fact that he pressured the student to write the paper for too close a deadline in the first place).
For the record, I have supervised several PhD theses, and in every paper I’ve cowritten with a student, I’ve carefully checked every line and every calculation. Often I’ve rewritten large parts of the MS to make it more intelligible. I’m sure a large proportion of PIs do the same, but certainly not all of them.
Re the piece in The Guardian: Apart from a direct confrontation with the PI, I see no reason not to include his/her name on a paper that transpired from the PI’s lab. The guidelines for authorship that most journals provide implicitly seek to prevent misconduct, not to incite piecemeal war over the authors’ line-up. I’ll explain.
It’s not about who runs the lab or has the grants or the reputation that might help the manuscript get published in a better journal. It’s about the science. People make contributions that may differ in scope of involvement or investment; the question is: how crucial are they for the project? A good idea – a momentous action stemming from a life-long experience – is worth more than months of pipetting. Support for a high-risk project is crucial for its eventual success. Building a lab that provides a nurturing and stimulating environment is worth a lot too. Like I said before, I don’t see a reason to ignore these intangibles in determining who ends up on the authors’ list.
Vlad, I agree with your points!
Thank you for the priceless: “a life-long experience – is worth more than months of pipetting.”
The title of The Guardian piece is what I see as the main non sequitur: ‘My professor demands to be listed as an author on many of my papers.’ If she’s your professor she should be on your papers published in the scientific literature as long as you are her student.
It is obvious that there are problems with traditional peer review, but this statement is preposterous:
“There is evidence on peer review, but few scientists and scientific editors seem to know of it – and what it shows is that the process has little if any benefit and lots of flaws.”
There’s a reason why if you go to a random predatory publisher’s website, a lot of what you’ll find is trash science like “Universe is Like Space Ship” or what have you. The reason is peer review.
Russian scientists protest!
http://news.yahoo.com/russian-scientists-stage-rare-anti-government-demo-162959888.html
Regarding the piece in the Times Higher Ed. supplement: peer review may have deep flaws, but so does that study on peer review. And without peer review, the Jenny McCarthys of the world would have an easier time of duping people via the University of Google.
As for the study on peer review (see it here: http://jrs.sagepub.com/content/101/10/507.full.pdf), here are my comments:
The paper focuses on the low number of errors cited, with figures like “an average of 2.58 out of 9 errors found” featured prominently in the Abstract, Results, and the Discussion. So finding errors was the focus.
The number of reviewers who rejected each paper is relegated to a figure legend, and there’s no discussion that rejection rates were very high: paper 1 was rejected by a 2.1 to 1 margin, with paper 2 at nearly 5 to 1, and paper 3 at 4.4 to 1. Overall, there were 1,006 decisions to reject vs. 301 to accept.
When I’m reviewing something and think it’s very bad, I reach a point where enough is enough. I suspect that many reviewers are the same: how much do you have to whip a dead horse? As a result, the study’s focus on error-catching in 3 rejection-worthy papers strikes me as seriously flawed. Personally, I think the study would have been much more informative if it had only included 2-3 important but correctable errors per paper, or, in the original study design, focused on why the 300 reviewers voted to accept.
Moreover, the Methods say nothing about the details of the training or what was expected of the reviewers. Were they told that the purpose of the study was to examine error detection rates, or just to accept/reject and explain why? If only sparse information was provided, then given the above points, it seems that the study ended up with a bias toward reducing the number of errors found. This is a major (fatal) flaw.
I could go on, but I’m done whipping this dead horse.
Valerie, your criticisms are valuable for the scientific community, and have been noted at PubPeer:
https://pubpeer.com/publications/54CDC9161958EBE471CC7A633C236B
For the record, for readers, the full details are:
DOI 10.1258/jrsm.2008.080062
What errors do peer reviewers detect, and does training improve their ability to detect them?
Sara Schroter [1], Nick Black [2], Stephen Evans [2], Fiona Godlee [1], Lyda Osorio [2], Richard Smith [1]
[1] BMJ, BMA House, Tavistock Square, London WC1H 9JR, UK
[2] London School of Hygiene & Tropical Medicine, London WC1E 7HT, UK
On a humorous note, the date of acceptance of this paper would make it possibly the oldest scientific paper, but for the submission and acceptance dates to be compatible, a “back-to-the-future” phenomenon is likely required:
Accepted 19 October 1006.
http://apsjournals.apsnet.org/doi/pdf/10.1094/MPMI-20-4-0335
Maybe they used a flux capacitor.
I read a story today that really made me concerned, especially since this appears to be legal:
http://www.businessinsider.com/this-company-will-sell-you-fake-credentials-to-get-a-real-job-2015-6
It then made me ask myself: how frequent might this phenomenon be in science?