Weekend reads: A new plagiarism euphemism; how Photoshop abuse destroys science; bias against women authors

The week at Retraction Watch featured a look at what happens to authors when a journal is delisted, a reminder of how hard it is to figure out whether a paper has been retracted, and a survey on how common plagiarism is in economics. Here’s what was happening elsewhere:

Like Retraction Watch? You can make a tax-deductible contribution to support our growth, follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up for an email every time there’s a new post (look for the “follow” button at the lower right part of your screen), or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].

15 thoughts on “Weekend reads: A new plagiarism euphemism; how Photoshop abuse destroys science; bias against women authors”

  1. “Image sleuths — including “Claire Francis” and Elisabeth Bik, whose names will be familiar to Retraction Watch readers — explain how Photoshop abuse is ruining science.” In my humble opinion, it is not Photoshop (or Adobe, for that matter) that is to blame but the lax standards of journal editors, who allowed complete gel images to give way to strips containing only one band. Photoshop, after all, is just a tool. This news item is also a bit of a below-the-belt hit, as most of us here do not read Dutch.

      1. Thank you, Ivan. Now I can read the abstract, but cannot get past the “subscribe” pop-up. IMO, the idea of blaming Photoshop is based on a misunderstanding anyway. If an image has been acquired digitally, there is, technically speaking, no such thing as an image with no manipulation at all. It is a matter of definition what qualifies as a manipulation and what does not. Then again, that is up to journal editors and reviewers.

        1. If you submit your email address in that pop-up, you can read the story for free. If I’m understanding your argument correctly, it’s a bit like “guns don’t kill people, people kill people,” an argument about which much has been said by people smarter than me. I’d suggest reading the entire piece before passing judgment on its arguments.

          1. But guns are designed for the sole purpose of killing. Adjusting brightness/contrast, and even copy-pasting, are not designed for the sole purpose of falsification (e.g., one can paste in a scale bar).

          2. Thank you, Ivan, for your patience. Now I could read the article, and I see that it is a good piece of journalism, albeit with a misleading title and a limited understanding of what constitutes image manipulation. I repeat: Photoshop is not to blame, but the reviewers and editors who allowed the manuscript to pass through are. It is easy to see why the western blot has become possibly the most frequently manipulated piece of data, besides, of course, statistical analysis. A recent technical review by Kevin A. Janes (doi: 10.1126/scisignal.2005966) addresses the critical factors in this technique, but in 35 years of my career I have personally not met anyone as meticulous as this author. Incidentally, doing western blots right has become more difficult as radioactive labeling has been replaced by non-radioactive labeling in recent years.

  2. Re ‘Women authors have been persistently underrepresented in high-profile journals…’

    From the preprint: ‘First we show that the proportion of women last authors in high profile research journals is much lower than the proportion of women scientists receiving USA RO1 grants or the European equivalents.’

    How do the authors of the preprint know that these grants are distributed fairly? Perhaps they are, perhaps they aren’t. There is a lot more politics in giving out grants than in publishing papers, so perhaps the perceived injustice really runs the other way. As scientists we should certainly be allowed to at least entertain the possibility.

  3. > “I refuse all review requests with deadlines < 3 weeks,” says Stephen B. Heard.

    Or maybe we should just drop the self-deception of imagining ourselves as gatekeepers serving some higher cause. My job as a reviewer does not involve assessing vague things like "interest for the journal's audience". A well-written paper that raises valid questions and relies on adequate methodology is ready for publication; the rest will be settled by the broader community of the subfield, or is in the eye of the beholder.

    1. True. Two to three weeks is a reasonable time to review a concise paper with solid experiments and data, sometimes less for an urgent communication.

      It should be the editors’ job to screen such papers before sending them out to reviewers. As reviewers, we can always decline papers we are not comfortable reviewing, or decline when we are too busy.

      Saying something like “I refuse all review requests with deadlines < 3 weeks” sounds like an excuse for not being able to review the paper. We all want the comments on OUR papers to come back as fast as possible, so that statement sounds pretty arrogant to me.

      1. Thank you; I agree! Two weeks is perfectly reasonable, and it is fair to expect reviewers to prioritize their time appropriately out of respect for their colleagues. If you really can’t do it in a reasonable time, just say no. But also remember that the number of reviews you owe the system is roughly three times the number of papers you’ve submitted yourself, and make sure you fulfill that obligation.

    2. The deadline I am given is usually not a criterion I use to decide whether I am going to accept or decline the invitation.

      I usually define my own deadline before accepting the invitation. If I need an extension then I ask the editor, and only accept when he or she approves.

      I believe that if you are reasonable, editors will tend to approve.

    3. > My job as a reviewer does not involve assessing vague things like “interest for the journal’s audience”.

      I could not agree more. I always try not to assess the relevance for the journal (or please tell me how I am supposed to know the audience well enough to decide what would be of interest).

      These days I even try to avoid making any recommendation at all. I believe I structure my report well enough that an editor can decide for himself/herself. At the very least, he/she should know the objectives/standards/audience of the journal better than I do (that is, for editors who actually read the reports and do not simply base their decision on the most common recommendation).

      1. As the EiC of a journal myself, I rather like it when the reviewer makes a recommendation. Maybe what you write is true for journals with professional editors, but for an academic editor like me, your opinion is truly important.

        1. You have a point.

          I still think that I might not be the best person to make a recommendation (unless it is a no-brainer). That’s why I always try to highlight in my report the key elements so that the editor can decide.

          Say I do not identify any problem in a manuscript but I believe the results are not really novel or exciting. I would love to just put that forward and let the editor decide whether novelty is a real issue in that case. The same goes if I believe the main problem is poor writing. The editor has probably seen many poorly written submissions and is therefore in the best position to either reject or ask for a revised manuscript.

          Anyway, I usually still make recommendations, because the submission systems often do not let me submit without one!

  4. I’m not so sure about that “bias against research on gender bias” study. They found 355 articles on gender and 691 on race. We know that impact factor varies between fields. Maybe the smaller number of gender-bias articles relative to race simply indicates that the field is smaller, so getting into a journal with an IF of 2 is actually a good sign? I personally don’t think that is much worse than an IF of 2.5, and the IFs for qualitative gender and race studies are virtually the same – 1.64 (gender) vs 1.45 (race). The standard deviations on all of these values are also greater than the differences between them (see the rough check after this comment)…

    Even assuming that there is a bias against gender-bias research, it certainly hasn’t stopped gender bias from making news headlines, featuring regularly in top journals like Nature and Science, and driving the establishment of women-in-science committees in institutions and in professional societies.
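    A rough back-of-the-envelope sketch of the point above about the differences being small: it uses the mean IFs and article counts quoted in the comment, but the standard deviation of 0.8 is a purely illustrative assumption (the comment only says the SDs exceed the differences, not what they are), so treat this as a plausibility check rather than a reanalysis.

    ```python
    # Back-of-the-envelope check: how large is the gender-vs-race IF gap in
    # standardized terms? Means and article counts are quoted in the comment
    # above; the standard deviation of 0.8 is a purely illustrative assumption.
    mean_gender, n_gender = 1.64, 355   # gender-bias articles (count for context only)
    mean_race, n_race = 1.45, 691       # race-bias articles (count for context only)
    sd_assumed = 0.8                    # assumed; not given in the comment

    diff = mean_gender - mean_race
    cohens_d = diff / sd_assumed        # standardized mean difference
    print(f"Difference in mean IF: {diff:.2f}")
    print(f"Cohen's d with assumed SD {sd_assumed}: {cohens_d:.2f}")  # ~0.24, a small effect
    ```

    The wider the real spread is relative to the 0.19 gap in means, the smaller this standardized difference becomes, which is essentially the point being made.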
