Another busy week at Retraction Watch, beginning with a story that stunned even us. There was lots happening elsewhere on the web, too, particularly among science journalists taking a look at their own work:
- Some psychology researchers whose findings were replicated are now claiming they’re being bullied.
- NPR’s take on Social Psychology’s approach to replication: “The pressure to publish original research can mean scientists are neglecting to verify the work of others.”
- “I do not fully trust peer review.” A scientist’s confessional.
- How reliable are medical research checks? asks the BBC (quotes Ivan)
- New York Times columnist Carl Zimmer has a list of words that science writers should avoid, from “miracle” to “paradigm shift.”
- On The Media interviews Gary Schwitzer of HealthNewsReview.org about what’s wrong with coverage of medical studies.
- Also on this week’s On The Media, Virginia Hughes wonders whether she really wants to keep covering such studies.
- How big a problem is self-censorship in science journalism? asks Keith Kloor.
- Do scientists and journalists share a culture of “never back down” and never admitting error? asks Andrew Gelman.
- Heralded medical treatments often fail to live up to their promise, reports the Kansas City Star (quotes Ivan).
- For vaccines, “non-industry sponsored trials were 4.42-fold…more likely to report negative or mixed findings.”
- It’s time to end ordered authorship on scientific papers, says Lior Pachter.
- Harvard’s Tom Stossel says concerns over financial conflicts of interest are overblown. Ivan disagrees.
- The scientific community’s response to a paper on resources in reproducibility exceeded expectations, say the authors.
- Neuroskeptic takes a look at a strange case of scientific criticism by pseudonym.
- Scientists are only human, but the errors in the BMJ’s statin papers show the perils of bias, says Jalees Rehman.
- “Our study suggests that neuroscience information may provide an illusion of explanatory depth.” (via Neurocritic)
- How can we make the publishing process more sound? asks Wiley’s Alice Meadows, reporting on a session at STM last month featuring Ivan, John Bohannon, Phil Davis, and Chris Graf.
- Fred Dylla, executive director and CEO of the American Institute of Physics, has his own take on the session.
- Public Citizen says the NIH interfered with an investigation of alleged ethical lapses in a clinical trial.
- Andrew Gelman dives into the data behind a correction about climate change and the economy that we reported on this week.
Quote from the first article on replication in Social Psychology:
A social psychologist says that her graduate students “are worried about publishing their work out of fear that data detectives might come after them and try to find something wrong.”
I found this statement incredibly interesting. If you’re not confident that your data can hold up to external scrutiny then you shouldn’t publish it.
Amen! Amen! Amen! Does this = fear of truth?
“If you’re not confident that your data can hold up to external scrutiny then you shouldn’t publish it.”
This is like the argument that if you have nothing to hide, then you should not worry about your government spying on you all the time. Let me explain. What this student is saying (assuming she has not faked data) is that she is worried about being unjustly accused of wrongdoing by people who, for some reason, have a lot of time on their hands and have declared themselves protectors of science purity. What she is picking up is the disregard for the risk of harassing potentially innocent individuals that, unfortunately, is often seen in comments at RW and similar blogs. These blogs should have some rules in place to minimize this kind of issue.
We are not talking about privacy vs security. That’s comparing apples and oranges, and completely off-topic.
Do you actually know of anyone who has been falsely accused ‘by people who have a lot of time on their hands, and have declared themselves protectors of science purity’?
I also dislike vague accusations of ‘harassment of innocent individuals’ on blogs.
The “falsely accused honest scientist” is the Sasquatch of the debate over replication and data scrutiny.
We hear so much about him or her, but can anyone point to a specific case? I have yet to see one.
Agreed. You only have to look at the extent to which some fairly skeptical people have been bending over backwards to find any possible explanation other than malpractice for Förster’s results — up to and including what boils down to “being no more competent at statistics, psychology, or lab management than a junior high school student, but not actually dishonest”. This is not remotely like the situation where the wrong person gets accused of some violent assault because they were misidentified and picked out of a line-up.
We should not be applying the standards and norms of the criminal justice system here, since we are not dealing with ordinary citizens who are just going about their business when the police come knocking; everyone who publishes research signs all kinds of forms to say that they will share data, understand their school’s policy on ethics, etc etc. Nobody came round to their house at 5am in jackboots and forced them to write a slightly over-ambitious General Discussion section at gunpoint.
Incidentally, since we’re on the subject: the criminal justice system accepts (of course, we say it’s “unacceptable”, but we accept it) a remarkably high Type I error rate, even in death penalty cases. We accept that as a society because we are worried about the consequences of a high Type II error rate. Up to now, in psychology, we have been setting the bar for Type I error in accusing people of malpractice at p < .00001, which /a/ is completely out of whack with an actual prevalence rate of at least 1-2% (http://dx.doi.org/10.1371/journal.pone.0005738) and /b/ means we have a Type II error rate that makes the field a laughing stock (http://www.psmag.com/navigation/health-and-behavior/can-social-scientists-save-themselves-human-behavior-78858/).
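To make that trade-off concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is an assumption chosen purely for illustration (the ~2% prevalence echoes the figure cited above; the other rates are invented), not a result from the comment or the linked papers:

```python
# Back-of-the-envelope illustration of the Type I / Type II trade-off described above.
# All rates are illustrative assumptions, not empirical estimates.

prevalence = 0.02            # assumed share of researchers with some malpractice (~the cited 1-2%)
sensitivity = 0.01           # assumed share of real cases that clear a very stringent evidentiary bar
false_positive_rate = 1e-5   # assumed share of honest researchers wrongly flagged (the "p < .00001" bar)

researchers = 100_000
guilty = prevalence * researchers
innocent = researchers - guilty

caught = sensitivity * guilty                     # true positives
falsely_accused = false_positive_rate * innocent  # Type I errors
missed = guilty - caught                          # Type II errors

print(f"Of {researchers:,} researchers: ~{guilty:.0f} with malpractice, "
      f"~{caught:.0f} flagged, ~{missed:.0f} missed, "
      f"~{falsely_accused:.1f} honest researchers falsely flagged")
```

Under these assumed numbers, a bar that stringent flags about one honest researcher while missing nearly all of the real cases, which is the commenter’s point about the Type II error rate.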
I found the discussion following the “just make up an elementary analysis” paper in Organometallics that occurred here on RW and other places to be quite disgusting. See my previous comments at http://retractionwatch.com/2013/08/08/insert-data-here-did-researcher-instruct-co-author-to-make-up-results-for-chemistry-paper/#comment-61439 for the discussion of why the accusations of scientific misconduct were completely unfounded. Note also that the journal stated, in an editorial comment published together with the correction of the paper, that there was no evidence for fraud whatsoever.
I’m not saying that such a witch-hunt is the norm when dealing with honest mistakes, but we have to be careful not to create a climate of fear, uncertainty, and doubt, where people cannot admit any mistakes anymore and the overall quality of science will suffer, along with the people who are practicing it.
That is an interesting case, but an atypical one because in that case suspicion was raised by something someone said (and it was prima facie suspicious, even if it turned out to be innocent), rather than because of suspicious patterns in the data itself.
Bernd
Do you have any examples (say 5) where accusations of scientific misconduct have been unfounded?
We must ensure there are full, complete, and transparent protocols in place at institutions that do not result in the witch-hunting of science-fraud hunters (such as Paul Brookes).
This isn’t the NSA snooping on your private emails, this is someone making a choice to put their data and scientific views out into the public. Replication and post-publication review are integral parts of the scientific process, it’s what every scientist should expect.
Greg, I see your point, and I tend to agree. The “if you have done nothing wrong then you have nothing to worry about” argument only works if there is a perfect justice system. That being not the case, it is perfectly justified for an innocent person such as this student, we assume, to worry about being accused of something based on circumstantial “evidence” that gets amplified by keyboard vigilantes and self-proclaimed statisticians. At some point, somebody innocent whose reputation was damaged by this kind of behavior will sue and that will help set some proper rules of the game.
Rather than slandering a whole group, can you give a specific example of the kind of thing you are concerned about?
Greg, you bring up a legitimate point that should be seriously considered. NS, if all is so great in the community, then how do you explain that the graduate students above “are worried about publishing their work out of fear that data detectives might come after them and try to find something wrong”? I doubt they are alone. The implication of what you and the others are saying is that these students must have done something wrong, which would be rather insulting. Instead, we should all take this as a sign of how our activities are being perceived out there. It’s not a good sign, and we should think hard about how these perceptions are managed.
Sam Dalton, yes, these students fear that the “data detectives might come after them and try to find something wrong.” This is not healthy. I would not want any of my students to feel inhibited about publishing their work just because there may be some people out there who like to stir up trouble about any piece of work they did not publish, just for the heck of it. No experiment is perfect, practicing scientists know that.
If we publish but others aren’t allowed to scrutinize our data, then we’ve moved out of the realm of science and into the realm of philosophy/religion. I’m not quite sure I understand the current backlash against replication and peer review.
“People who, for some reason, have a lot of time on their hands, and have declared themselves protectors of science purity”
That’s a pretty good description of ‘scientists’, when you think about it. Or it should be.
Greg is correct. I don’t know any academics with time on their hands for debunking other people’s work. If anything, my colleagues always complain that they have no time for anything other than doing their own research, their teaching, and administration.
Time? What is it? I used to have time as a postdoc, which may explain why some people have time.
Note: The comments from “Alex Werner,” “Sam Dalton,” and “Greg Egyz” that appear to be three different commenters agreeing with one another are all from a single IP address.
That’s absolutely priceless! Someone accuses others of having too much time on their hands… then has time to do it again… and again!
We ought not to be too sceptical; there may be an honest explanation.
There may be an important scientific conference where scientists of like-mind are using the same computer.
It is a plausible possibility.
If the commentators are indeed the same person, reading into what ‘Sam’ wrote “I don’t know any academics with time on their hands for debunking other people’s work” does give an inkling into the minds of those, clearly in the minority, who do not think anyone should question published science.
Is this a case of a ‘scientist’ attempting to bully those who wish to debunk science-fraud?
Could be actually. 4,000 psychologists were at the APS conference this weekend and I’m sure many of them were using the conference-provided wifi.
Certainly possible, but the IP address isn’t showing up as San Francisco.
Possible but Google reveals that “Greg Egyz” is a name that exists only in Retraction Watch comment threads; “Alex Werner” is not the name of a scientist that I could find (the closest match I found was a historian).
Google Scholar: Author Sam Dalton: finds little relevant, except:
Book: Pro JSP 2.
Result 9 tells us:
“Sam Dalton has worked with Java and related technologies in London for a number of years. Sam graduated from the University of Reading in 1997 with a 2:1 honors degree in Computer Science.” He also coauthored another Java book.
Internet identities are often slippery.
Of course, the commenter Sam Dalton may be:
1) That Sam Dalton.
2) Another Sam Dalton
3) Somebody with a different name entirely.
Given that {“Sam Dalton”, “Greg Egyz”, and “Alex Werner”}:
a) Have no other visible presence in Google Scholar
b) Have no links to websites
It’s hard to know anything except the use of the same IP address, but maybe they were sitting in the same coffee shop and thus sharing its IP address via DHCP. 🙂
As far as the grad student whose comment started this: what we should say to her is, you have to publish. You have to be scrutinized. There will be assholes. If they are obvious trolls, ignore them. If someone critiques your paper, you critique their critique. If you’re a scientist you should know how to do that, since you need to do the same thing, if not necessarily on paper, for the papers your own research rests on. “Is this effect real?” aka “is this paper crap?”
We academics get to disagree and swear to our hearts’ content, but honesty is mandatory.
Investigation of others’ data and publications is termed “peer review” and is done by scientists who are interested in interfering in their own field of research; this has been established since the dawn of formal science. I guess there are other ways of publishing data without any peer review, especially if one does not intend it to hold as scientific evidence about anything = fiction/personal opinion/wishful thinking.
Another way to look at the above students’ alleged fears is as a failure of mentorship, collaboration, and peer review.
In an era of limited resources and increasingly complex datasets, it can be difficult for a trainee to be sure that they have used the best analytical methods and performed suitable data QA/QC. Consequently, graduate students are at the mercy of busy mentors, labmates, collaborators, and peer reviewers, who almost certainly don’t provide adequate oversight. All of this occurs in the face of dismal research funding and employment opportunities, perhaps making the fearful, secretive graduate student the most sane of all.
A classic Catch-22.
Agreed. I am (initially) more worried about the social psychologist making the statement than about the graduate students themselves: students need to be taught, by good example to a large extent. No one can expect students to arrive with the scientific process and ethical standards that we (should) use already embedded in them. So don’t criticise the students’ fears; teach them.
Can you guys stop sending us to paywalled links without telling us they are? Thanks.
This is all disgusting and nauseating. I am a victim of academic bullying and plagiarism, and now I know that science is dirty business. Everyone should know it, thank you for staying on this!
The Lior Pachter commentary is important, and the issue of authorship remains one of the cornerstones of scientific success and integrity. Many of the comments made in that piece are valid, and more attention should be paid to it. In particular, there is the issue of authorship as defined by the ICMJE versus super-groups with hundreds or even thousands of authors, which makes authorship almost automatically incompatible with the ICMJE definition, simply based on the numbers and on the fact that authorship cannot be verified. One of the finest models of authorship that ever existed, I believe, the Hardy-Littlewood rule*, would be almost impossible to sustain in this day and age, where anything and everything seems to be manipulated in science. Without a basic element, trust, there cannot be integral science. What retractions and increasingly hard-line editorial and publishing policies now indicate is how trust has been gradually eroded in science and science publishing, leaving us with a robotized and skewed publishing landscape. Under such a distorted framework, science may start to attract only a very limited “type” of scientist, which contradicts the essence of science’s liberty.
* http://www.globalsciencebooks.info/JournalsSup/images/2013/AAJPSB_7(SI1)/AAJPSB_7(SI1)72-75o.pdf