The week at Retraction Watch featured a look at what happens to authors when a journal is delisted, a reminder of how hard it is to figure out whether a paper has been retracted, and a survey on how common plagiarism is in economics. Here’s what was happening elsewhere:
- That wasn’t plagiarism, it was “unprofessional quotation,” says Vietnam’s Academy of Social Sciences about a thesis. (Chi Mi, Vietnam.net)
- Image sleuths — including “Claire Francis” and Elisabeth Bik, whose names will be familiar to Retraction Watch readers — explain how Photoshop abuse is ruining science. Our Ivan Oransky is also quoted. The piece is one of two prompted by investigations at KU Leuven. (Maxie Eckert, Sijn Cools, De Standaard)
- “Women authors have been persistently underrepresented in high-profile journals…The percent of female first and last authors is negatively associated with a journal’s impact factor.” (preprint, bioRxiv) “These days there is an overwhelming consensus in our scientific community that scientific talent is not gendered…It is time for the journals to ‘lean in.’” (Ione Fine and Alicia Shen, writing about the preprint, in The Conversation)
- “In other words, people do not think that we are cranks.” Nick Brown and James Heathers explain how they debunk — and why they are so successful at it. (Medium)
- Do papers from Harvard have a better chance of being published at the New England Journal of Medicine, which is based there? A new study tries to answer that question. (Scientometrics)
- “[A]rticles on gender bias are funded less often and published in journals with a lower Impact Factor than articles on comparable instances of social discrimination.” (Scientometrics)
- Authors of a premier medical textbook “received more than $11 million…from makers of drugs and medical devices — not a penny of which was disclosed to readers.” (Our co-founders, STAT) Our Adam Marcus speaks to NPR’s Here & Now about the study. (Robin Young)
- “The journal Archives of Iranian Medicine just published a set of 33 papers about one study.” Neuroskeptic weighs in on a staggering case of salami slicing. (Discover)
- The failure of a key UK government minister to give evidence at a recent hearing “may lead scholars to conclude [research integrity] is ‘not a ministerial priority,’” Jack Grove of Times Higher Education reports. Our Ivan Oransky gave evidence to the same committee in December.
- “Susan Dynarski, a prominent scholar, accuses Kevin Hassett, a top Trump administration economist, of plagiarizing her work in a 2007 column.” (Andrew Kreighbaum, Inside Higher Ed)
- PEERE, an initiative focused on “new frontiers of peer review,” held a conference in Rome this week.
- “[I]t appears to be a remarkable breach of trust.” Did a study of Portland, Oregon-area K-12 students break U.S. federal laws? (Katie Shepherd, Willamette Week)
- “False investigators and coercive citation are widespread in academic research,” writes Allen Wilhite. (LSE Impact Blog)
- “I refuse all review requests with deadlines < 3 weeks,” says Stephen B. Heard. “Here’s why, and how.” (Scientist Sees Squirrel)
- “Prominent Columbia University neuroscientist Tom Jessell, 66, has been fired for ‘serious [behavioral] violations’ and the university is closing his lab,” Meredith Wadman reports. (Science)
- A look at the 96 retractions by Joachim Boldt finds that “retraction practices are not uniform and that guidelines for retraction are still not being fully implemented, resulting in retractions of insufficient quantity and quality.” (Christian Wiedermann, Accountability in Research)
- What could artificial intelligence (AI) mean for scientific publishing? asks Jabe Wilson. (R&D)
Like Retraction Watch? You can make a tax-deductible contribution to support our growth, follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up for an email every time there’s a new post (look for the “follow” button at the lower right part of your screen), or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
“Image sleuths — including ‘Claire Francis’ and Elisabeth Bik, whose names will be familiar to Retraction Watch readers — explain how Photoshop abuse is ruining science.” In my humble opinion, it is not Photoshop (or Adobe, for that matter) that is to blame, but the lax standards of journal editors, who allowed the shift from complete gel images to strips containing only one band. Photoshop, after all, is just a tool. This news item is also something of a below-the-belt hit, as most of us here do not read Dutch.
Have you tried reading the piece in Chrome? Google Translate does a reasonable job.
Thank you, Ivan. Now I can read the abstract, but cannot get past the “subscribe” pop-up. IMO, the idea of blaming Photoshop is based on a misunderstanding, anyway. There just can’t be a situation where technically there is no image manipulation at all, especially if an image has been acquired digitally. It is a matter of definition what qualifies as a manipulation, and what does not. Then again, this is up to journal editors and reviewers.
If you submit your email address in that pop-up, you can read the story for free. If I’m understanding your argument correctly, it’s a bit like “guns don’t kill people, people kill people,” an argument about which much has been said by people smarter than me. I’d suggest reading the entire piece before passing judgment on its arguments.
But guns are designed for the sole purpose of killing. Adjusting brightness/contrast and even copy-pasting are not designed for the sole purpose of falsification (e.g., one can paste in a scale bar).
Thank you, Ivan, for your patience. Now I could read the article, and I see that it is a good piece of journalism, albeit with a misleading title and a limited understanding of what constitutes image manipulation. I repeat: Photoshop is not to blame, but the reviewers and editors who allowed the manuscript to pass through are. It is easy to see why the western blot has become possibly the most frequently manipulated piece of data, besides, of course, statistical analysis. A recent technical review by Kevin A. Janes (doi: 10.1126/scisignal.2005966) addresses the critical factors in this technique, but I personally have not met anyone in 35 years of my career who is as meticulous as this author. Incidentally, doing western blots right has become more difficult as radioactive labeling has been replaced by non-radioactive labeling in recent years.
Re ‘Women authors have been persistently underrepresented in high-profile journals…’
From the preprint: ‘First we show that the proportion of women last authors in high profile research journals is much lower than the proportion of women scientists receiving USA RO1 grants or the European equivalents.’
How do the authors of the preprint know that these grants are distributed fairly? Perhaps they are, perhaps they aren’t. There is a lot more politics in giving out grants than in publishing papers, so perhaps the perceived injustice is really the other way around. As scientists we should certainly be allowed to at least entertain the possibility.
> “I refuse all review requests with deadlines < 3 weeks,” says Stephen B. Heard.
Or maybe just drop the self-deception of imagining ourselves as gatekeepers serving some higher cause. My job as a reviewer does not involve assessing vague things like “interest for the journal’s audience.” A well-written paper that raises valid questions and relies on adequate methodology is ready for publication; the rest will be settled by the broader community of the subfield, or is in the eye of the beholder.
True. Two to three weeks is a reasonable time to review a concise paper with solid experiments and data, sometimes less for an urgent communication.
It should be the editors’ job to screen such papers before sending them out to reviewers. As reviewers, we can always refuse papers we are not comfortable reviewing, or decline if we are too busy.
Saying something like “I refuse all review requests with deadlines < 3 weeks” is like making an excuse for not being able to review the paper. We all want the comments on OUR papers to come back as fast as possible, so that comment sounds pretty arrogant to me.
Thank you; I agree! Two weeks is perfectly reasonable, and yes it is reasonable to expect reviewers to prioritize their time appropriately out of respect for their colleagues. If you really can’t do it in a reasonable time, just say no. But also remember that the number of reviews you owe the system is 3x the number of papers you’ve submitted yourself, and make sure you fulfill your obligations.
The deadline I am given is usually not a criterion I use to decide whether I am going to accept or decline the invitation.
I usually define my own deadline before accepting the invitation. If I need an extension then I ask the editor, and only accept when he or she approves.
I believe that if you are reasonable, editors will tend to approve.
> My job as a reviewer does not involve assessing vague things like “interest for the journal’s audience”.
I could not agree more. I always try not to assess the relevance for the journal (or please tell me how I am supposed to know the audience well enough to decide what would be of interest).
These days I am even trying to avoid making any recommendation at all. I believe I have structured my report well enough so that an editor can decide by himself/herself. At least, he/she should know the objectives/standards/audience of the journal better than me (that is for editors who actually read the reports, and do not simply base their decision on the most common recommendation).
As the EiC of a journal myself, I rather like it when the reviewer makes a recommendation. Maybe what you write is true for journals with professional editors, but to an academic editor like me, your opinion is truly important.
You have a point.
I still think that I might not be the best person to make a recommendation (unless it is a no-brainer). That’s why I always try to highlight in my report the key elements so that the editor can decide.
Say I do not identify any problem in a manuscript, but I believe that the results are not really novel or exciting. I would love to just put that forward and let the editor decide whether novelty is a real issue in that case. It is the same if I believe the main problem is poor writing. The editor has probably seen many submissions that are poorly written and is therefore in the best position to either reject or ask for a revised manuscript.
Anyway, I am usually still making recommendations because systems often do not let me submit without doing so!
I’m not so sure about that “bias against research on gender bias” study. They found 355 articles on gender and 691 on race. We know that impact factor varies between fields. Maybe the smaller number of gender bias articles relative to race is just an indication that the field is smaller, and so getting into a journal with an IF of 2 is actually a good sign? I personally don’t think it’s much worse than an IF of 2.5, and the qualitative IFs between gender and race studies are virtually the same – 1.64 (gender) vs 1.45 (race). The standard deviations on all of these values are also greater than the differences…
Even assuming that there is a bias against gender bias research, it certainly hasn’t stopped gender bias from making news headlines, featuring regularly in top journals like Nature and Science, and driving the establishment of women-in-science committees in institutions and in professional societies.