We posed this question to some members of the board of directors of our parent non-profit organization, who offered up some valuable advice based on many years of experience working at journals and organizations such as the Committee on Publication Ethics (COPE).
The first step: Determine whether the fact that a reference has been retracted has any impact on the conclusions of your own paper. From Elizabeth Wager, publications consultant, Sideview; former chair, COPE:
Not every study contains accurate information — but over time, some of those incorrect findings can become canonized as “fact.” How does this happen? And how can we avoid its impact on scientific research? Carl Bergstrom of the University of Washington in Seattle, author of a study published on arXiv in September, explains how the fight over information is like a rugby match, with competing sides pushing the ball towards fact or falsehood — and how to help ensure the ball moves in the right direction.
Would distributing all reviewers’ reports for a specific paper amongst every referee before deciding whether to accept or reject a manuscript make peer review fairer and quicker? This idea — called “cross-referee commenting” — is being implemented by the journal Development, as part of its attempt to improve the peer-review process. Katherine Brown, executive editor of Development in Cambridge, UK, who co-authored a recent editorial about the practice, spoke to us about the move.
We were struck recently by a paper in Scientometrics that proposed a unique way to fund scientists: Distribute money equally, but require that each scientist donate a portion to others – turning the federal funding system into a crowd-sourcing venture that funds people instead of projects. The proposal could save the inordinate amount of time scientists currently spend writing (and re-writing) grants, but would it actually work? First author Johan Bollen, of Indiana University, explains.
Retraction Watch: You propose something quite unique: Fund everyone equally, but ask them to give a fraction of their funding to someone else. Is the idea that scientists most respected by their peers will “earn” a higher percentage of funding, and everyone is just acting as reviewers?
How important is it to include a cover letter with a manuscript submission?
It seems that opinions differ. A 2013 article in Science Careers asked if it was a “relic;” but in a recent editorial, a journal editor reassures his readers that yes, he reads every cover letter — and yes, it’s important. (If you agree with him, let us know in our poll, below.)
What makes a person fabricate data? Pressure from different corners of his or her life — to get published, funding, or promotions? Are there personality traits that occur more often among those caught committing misconduct? We spoke with Cristy McGoff, director of the research integrity office at the University of North Carolina at Greensboro — who also has a master’s degree in forensic psychology from the John Jay College of Criminal Justice — about the minds of serial fraudsters.
To many Retraction Watch readers, the name Rolf Degen will sound very familiar – for the last few years, he’s earned quite a few “hat tips” by alerting us to retraction notices published across a wide range of fields of research, as well as research on trends in science publishing. We spoke to him about his passion for “truth, wisdom, and the scientific enterprise.”
Ever wish you could just publish an exciting result, without waiting for the full string of data needed to tell a complete story, which then gets held up for months by peer review at traditional journals? So do a lot of other researchers, who are working on ways to sidestep those barriers. One new project: ScienceMatters, a publishing platform where scientists can submit single, robust results for relatively quick peer review. We spoke with co-founders Lawrence Rajendran and Mirko Bischofberger about how this new next-generation journal platform works, and why it’s important.
Did that headline make sense? It isn’t really supposed to – it’s a sum-up of a recent satirical paper by Columbia statistician Andrew Gelman and Jonathan Falk of NERA Economic Consulting, entitled “NO TRUMP!: A statistical exercise in priming.” The paper – which they are presenting today during the International Conference on Machine Learning in New York City – estimates the effect of the Donald Trump candidacy on the use of no wild cards (known as trump cards) in the game of bridge. But, as they told us in an interview, the paper is about more than just that.
Retraction Watch: You have a remarkable hypothesis: “Many studies have demonstrated that people can be unconsciously goaded into different behavior through subtle psychological priming. We investigate the effect of the prospect of a Donald Trump presidency on the behavior of the top level of American bridge players.” Can you briefly explain your methodology, results and conclusions?