Like Howard Beale, the character in 1976’s “Network” who famously said “I’m as mad as hell, and I’m not going to take this anymore!,” the editors of the journal Cortex have decided they’ve had enough when it comes to plagiarism.
From an editorial in the current issue:
We will treat academic plagiarism as a misdeed, not as a mistake, and a paper that plagiarizes others will be immediately rejected and may be reported to the author’s academic affiliation.
The editors explain how:
…we at Cortex will take advantage of the screening tool to detect plagiarism and self-plagiarism, CrossCheck, powered by iThenticate. All papers not straight rejected in the first triage editorial step will be passed through this screening to filter their academic content. We are well aware that these tools are not infallible, and prone to false positives (e.g., Vihinen, 2009) and false negatives, but we believe that this is a further step to make our work more transparent and the scientific community even more trustworthy (see also Chambers, 2013).
Speaking of iThenticate, here’s a recent survey on plagiarism they conducted among more than 300 scientists in 50 countries.
Hat tip: Rolf Degen
They should not identify the exact tools to be used.
This problem would be rather easy to root out if tools such as “CrossCheck (powered by iThenticate!)” were made freely available to authors, either by their universities, as a pay-per-use web service, or as some open-source variant of the software through NCBI. If authors were then asked to tick a box certifying that they ran a check before submission (perhaps including a plagiarism-free score?), there would be no wiggle room.
There is no need for plagiarism tools to be available to authors. If you don’t plagiarize, you don’t need to check.
Thank you!
I disagree with this statement somewhat, for two reasons. The most important: while the tool needn’t be available to authors, it should be available to the scientific community! I should be able to run my own check and determine whether I agree with an editor’s decision. The less important: while I take a fairly hard line on plagiarism, making a tool such as this available to authors would A) get rid, once and for all, of those obnoxious retractions claiming that the author simply didn’t realize that cut-and-paste was an improper way to write an article, and B) be an excellent way to help honest people avoid accidental sentence appropriation (when, after you’ve read Smith et al., 2006 for the three hundredth time and then go to write your paper, you subconsciously slip a few lines in there without knowing). Having this tool available could tip one off to something like this, allowing the author to cite the source and either use quotation marks or adjust the language appropriately.
QAQ: If the paper by Smith et al. (2006) is so influential in your field, just cite Smith et al. (2006) in your manuscript, and acknowledge the key contribution of Smith in your introduction! A systematic check for accidental similarity with other published papers is an utter waste of your time.
Yes, I mean to say that obviously Smith et al. should be cited in your intro if you’ve read it 100x. I guess my point is that sometimes accidental misappropriation, not of paragraphs, but maybe of a sentence or a style, could occur. I can’t see a scenario where allowing authors to ensure that everything is without question their own is a bad thing.
Well, there are papers published with a dozen authors (or more!) on them. Such papers are often written collaboratively, with varying contributions from each of the authors. I still think it’s a good idea for the authors to run an “iThenticate” type check, so that this issue can be sorted out before submission. This final check could perhaps be the responsibility of the senior author. Why should the scientific community, publishers and journals shoulder this burden that is easily avoided if a ‘certified’, mutually acceptable tool is available?
Let us not forget that for many contributors to the scientific literature, English is not their primary language, and some of these authors sometimes borrow words and short phrases from sources (i.e., patchwriting). As such, these authors can benefit from text-matching tools to ensure that their submissions are acceptable. One need not spend any money to check one’s work, for there are free programs that can help in this regard. One such free tool, Wcopyfind, was developed by Lou Bloomfield some years ago and is freely available at: http://plagiarism.bloomfieldmedia.com/z-wordpress/software/.
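For readers curious what such text-matching tools actually do under the hood, here is a minimal sketch of the general idea: list the word n-grams two documents share verbatim. The six-word threshold and the sample sentences are invented for illustration; this is not Wcopyfind’s or iThenticate’s actual algorithm.

```python
# Minimal sketch of verbatim phrase matching, the core idea behind
# text-matching tools such as Wcopyfind: list the word n-grams that
# two documents share. The 6-word threshold and sample texts are
# illustrative assumptions, not any real tool's defaults.
import re

def word_ngrams(text, n):
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_phrases(doc_a, doc_b, n=6):
    """Return the n-word phrases that appear verbatim in both documents."""
    return word_ngrams(doc_a, n) & word_ngrams(doc_b, n)

manuscript = ("We found that the treatment significantly reduced error "
              "rates across all three conditions.")
source = ("Smith et al. found that the treatment significantly reduced "
          "error rates in the control group.")
for phrase in sorted(shared_phrases(manuscript, source)):
    print(phrase)
```

Real tools add far more (stemming, punctuation tolerance, huge reference corpora), but the reported “matches” come down to shared phrases like these, which is also why a human still has to judge whether a match is plagiarism or a common turn of phrase.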
What is the downside to making the tools available to the scientific community? Imagine a situation where two labs are collaborating and a postdoc in one of the labs has done the majority of the writing. As the PI in the other lab, wouldn’t you like a quick and easy way to check that the manuscript won’t be rejected out of hand, and that your academic institution won’t be notified of your misdeed? I realize that author contribution statements make clear who prepared the text, but how carefully are the journals going to follow that, or will each and every author be held responsible?
By default, submitted manuscripts are presumed non-plagiarized. When you withdraw cash, do you ask the bank teller to certify that the banknotes are not forged?
This post is headed to oblivion, and yet, I am fascinated by the downvotes. I see absolutely no downside to making anti-plagiarism tools freely available. Why go through the drama of publication followed by a retraction? Why should one coauthor be penalized because another one plagiarized his part of a collaboratively written manuscript? If the last step a corresponding author takes is to run the ‘submission-ready’ manuscript through a ‘standard’ software tool, then should plagiarized sections pop up, the authors can go back to the drawing board and sort it out on their own time (& dime!). Why waste the time of editors, reviewers, publishers and readers until accidental discovery and retraction?
Obviously this does not apply to the lunatics who plagiarize entire papers, but then such mega-fraudsters are in a class by themselves.
Is there going to be some human review, or will the results of a tool “prone to false positives” be taken as is? That does not seem like a good idea.
It is hard to imagine that they would declare plagiarism without a human check. This is a fine example of computer-augmented intelligence:
1) computers do what they do well and humans do not, i.e., FIND plausible matches in online material;
2) humans then have more time for applying judgment. As it stands, plagiarism discovery all too often depends on some reader happening to notice something.
But I’ve seen plagiarism where people copied many chunks of text, then made minor changes. I don’t know if they were using some tool and then making enough changes to get by the tool.
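That is precisely the weakness of verbatim matching: a handful of word substitutions breaks most of the shared n-grams, so the overlap score collapses. A toy illustration (not any specific tool’s scoring), assuming five-word phrases and Jaccard overlap:

```python
# Illustration (not any real tool's scoring) of why light editing can
# defeat verbatim n-gram matching: a few word substitutions break most
# of the shared n-grams, so the overlap score collapses.
import re

def ngrams(text, n=5):
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=5):
    """Jaccard overlap of the two documents' n-gram sets (0.0 to 1.0)."""
    x, y = ngrams(a, n), ngrams(b, n)
    return len(x & y) / len(x | y) if x | y else 0.0

original = ("The results demonstrate that repeated exposure to the stimulus "
            "produces a robust and lasting change in response latency.")
tweaked = ("The findings demonstrate that repeated exposure to this stimulus "
           "yields a robust and lasting shift in response latency.")

print(f"verbatim copy:  {jaccard(original, original):.2f}")  # 1.00
print(f"lightly edited: {jaccard(original, tweaked):.2f}")   # collapses toward 0
```

Here four substituted words out of nineteen are enough to destroy nearly all the shared five-word phrases, which is consistent with the editors’ admission that these tools are prone to false negatives.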
Presumably they will do the same for re-used data, including cases where data are re-used to describe different experiments?
Is plagiarism our biggest issue in scientific publications? So it is OK to plagiarize reviewers’ comments and suggested paragraphs verbatim, but it is a misdeed if you “self-plagiarize”?