A small survey of UK academics suggests misconduct such as faking data and plagiarism is occurring surprisingly often.
The survey — of 215 UK academics — estimated that 1 in 7 had plagiarized from someone else’s work, and nearly 1 in 5 had fabricated data. Here’s how Joanna Williams and David Roberts at the University of Kent summarize the results in their full report, published by the Society for Research into Higher Education:
- Using references to support predetermined arguments rather than illuminate debate was undertaken by 38.1% (± 5.1%) of respondents. This was the most frequently reported incidence of malpractice.
- 36.0% (± 7.6%) of respondents reported self-plagiarising. This is more than one in three researchers.
- 17.9% (± 6.1%) of academics surveyed reported having fabricated (entirely invented) research data. This is almost 1 in 5 researchers.
- 13.6% (± 7.5%) of respondents reported having engaged in plagiarism.
Although these findings suggest there is cause for concern, they are also markedly higher than the rates reported by previous studies, including a 2014 paper by Roberts, which used similar survey methods and found a fabrication rate of 0% among UK academics in biology.
For this and other reasons, these latest survey data aren’t keeping Ferric Fang — who has conducted research into academic misconduct at the University of Washington — up at night:
…I would be cautious in concluding too much from this survey with regard to the actual prevalence of specific unethical behaviors.
As part of the report, the researchers also conducted focus groups with UK academics, which offered unsurprising explanations for why researchers feel the need to cut corners, as Williams writes in the Times Higher Education:
Many of the academics we interviewed suggested that they and colleagues felt pushed into acting in ways that were if not unethical then at least lacking in integrity because of the pressures put upon them. This is seen most clearly in attitudes towards self-plagiarism. More than a third of those surveyed reported having published extracts from the same piece in more than one location. But, for some, this was not unethical but simply an efficient and common-sense means of maximising publications. As one academic explained: “I don’t think that self-plagiarism is the unethical thing; the unethical thing is the structural over-production that forces these things.”
However, Fang (who is a member of the board of directors of our parent organization) raised concerns about the authors’ survey method, specifically about the accuracy of a technique used to elicit truthful responses to questions about sensitive topics.
As part of “Academic Integrity: Exploring Tensions Between Perception and Practice in the Contemporary University,” Williams and Roberts relied on an “unmatched count” technique (UCT) to get people to honestly answer questions about misconduct. Here’s how they explain UCT in the paper:
The method involves randomly assigning participants to one of two groups: the control (baseline) group or the treatment group.
The control group was given a list of non-sensitive statements such as, ‘Last year I published fewer than 3 papers.’ Participants were then asked to indicate how many – but importantly not which – statements applied to them. The treatment group received the same statements but this time with the addition of a sensitive statement such as, ‘In the past 5 years I have fabricated (made up) research that was then published.’ They were also asked to indicate how many, but not which, statements applied to them. Participants in the treatment group are more likely to respond truthfully due to the protection the method affords them.
In other words: participants indicated how many statements on a short list applied to them, but not which ones. In the “treatment group,” one of the five statements was related to misconduct, such as “In the past 5 years I have fabricated (made up) research that was then published.” The authors then compared the totals between the treatment group and the control group, whose list contained no statements related to misconduct:
The proportion of the sample engaged in a particular sensitive behaviour was calculated as the difference in the mean number of statements between the control and treatment groups.
For a sample questionnaire, see page 36 of the full report.
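To make that calculation concrete, here is a minimal Python sketch of the difference-in-means estimate, using invented counts rather than the study’s data; the function name and numbers are ours, and the standard-error formula is one standard way of quantifying the margins of error discussed below.

```python
import statistics

def uct_prevalence(control_counts, treatment_counts):
    """Unmatched count technique: estimate the prevalence of the
    sensitive behaviour as the mean number of statements endorsed by
    the treatment group (who saw one extra, sensitive statement)
    minus the mean endorsed by the control group."""
    estimate = (statistics.mean(treatment_counts)
                - statistics.mean(control_counts))
    # Standard error of a difference between two independent means;
    # this is what drives the wide margins of error quoted above.
    std_err = (statistics.variance(treatment_counts) / len(treatment_counts)
               + statistics.variance(control_counts) / len(control_counts)) ** 0.5
    return estimate, std_err

# Hypothetical counts, not the study's data: the control group saw four
# non-sensitive statements; the treatment group saw those plus one
# sensitive statement.
control = [1, 2, 2, 3, 1, 2, 2, 1, 3, 2]
treatment = [2, 2, 3, 2, 1, 2, 3, 2, 3, 2]
est, se = uct_prevalence(control, treatment)
print(f"Estimated prevalence: {est:.1%} (± {1.96 * se:.1%} at 95% confidence)")
```

Because the estimate is a small difference between two noisy means, its uncertainty is large relative to the quantity being measured: with only ten respondents per group, as above, the interval dwarfs the estimate, and even with a couple of hundred respondents the intervals stay several percentage points wide, consistent with the ± figures in the bullet list above.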
Williams acknowledges in the THE piece that “this technique has quite a wide margin of error”; indeed, when the authors questioned respondents directly about misconduct, the rates of each offense were sometimes much lower than what the UCT method revealed. (The final table includes data from both the direct questions and the UCT technique.)
Fang agreed that the “uncertain reliability of the UCT method and the large error for data obtained by this method” cause him to question the findings, along with the fact that the survey included relatively few respondents.
Stanford University’s Daniele Fanelli, who studies misconduct, raised additional concerns about the methodology:
By distributing the survey by email, and then encouraging recipients to circulate it, they have quite clearly allowed for a self-selection of the sample. Respondents sampled in this way were quite likely to be especially motivated, for one reason or another, to respond. The eagerness of respondents appears to have been so significant that the authors found significant numbers of duplicate questionnaires, as well as people responding from outside the UK.
Even more importantly, the sample thus obtained is quite clearly unrepresentative of the research world, let alone the world of science. The demographics are severely skewed: 27% of respondents are from the Arts and Humanities, where no research data is usually produced, and another 46% are in the social sciences, which include qualitative research. If this wasn’t unrepresentative enough, the self-reported average publication rate of the majority of respondents is one paper per year. Quite clearly, respondents in this survey are mostly non-scientists and mostly non-researchers.
Fanelli added:
Their findings are quite obviously extreme and rather at odds with pretty much the rest of the literature. Respondents, according to the UCT method, commit data fabrication at three times the rate of falsification, and at higher rates than plagiarism. Paradoxically, relatively more realistic answers came from the uncorrected, direct questions, which are still rather extreme compared to dozens of existing surveys.
Here’s a link to the full survey data. The authors defined “fabrication” as making up data, “falsification” as misrepresenting results, “ethics form misuse” as cutting corners on disclosing all ethical issues, “reference misuse” as cherry-picking papers to support an argument, and “authorship abuse” as adding your name to a paper without merit.
Fang concluded:
Although the prevalence of scientific misconduct suggested by this and other surveys is worrisome, I actually take some consolation from the comments of individual researchers, which demonstrate that although scientists are under substantial career pressures, at least some exhibit considerable insight into the potentially corrosive effects of these pressures on their behavior and express a desire to do the right thing.
Williams told us she believes the findings should be a wake-up call for the UK system of research:
I think the problem in UK universities is that there is too much interference in the research process. Issues of ethics and integrity have been reduced to a bureaucratic ‘tick-box’ exercise. In fact, research itself has been reduced to a game where many academics have become more concerned with scoring points in the ‘REF’ (national Research Excellence Framework that determines the distribution of government funding to institutions) than in the pursuit of knowledge. For many academics ethics has become a series of hoops to jump through. I think universities need to permit academics more autonomy so they can be responsible for their own work. I don’t think integrity can be legislated into existence.
The solution: Step away from the “tick-box” exercise, Williams concluded:
I think institutional managers urgently need to consider the messages that they give out, particularly to junior academics. For younger academics especially, compiling a successful REF submission has become more important than researching with integrity. At a national level, the unintended consequences of such a high-stakes, narrow and managerial approach to evaluating research must be urgently considered. We need to get over the simplistic notion that, when it comes to research, obedience to a certain set of rules equals a quality contribution to the pursuit of knowledge.
This is not surprising for the UK or for other countries… there is too much competition, and the pressure to publish as a condition of getting a tenured position or funding is also huge all around the world…
I agree wholeheartedly with Anonymous. Gone are the days when competition wasn’t so intense and jobs were more available. But, at the same time, sacrificing ethical behavior just to get ahead in the academic world does not bode well for the future, individually and collectively.
The current system (reward structure and publication bias for “interesting” findings/narratives, etc) must be allowed to DIE completely, before a new one emerges. Hopefully, it will be one that emphasizes seeking truth, rather than going after “interesting” narratives (which is the primary reason for misconduct).
I would expect that most medical researchers have at some stage done something that, while it might not be considered misconduct, is a rather dubious research practice, and for some it will be the normal mode of operation: arbitrary choice of outcomes, varying the models and the statistical methods until something that appears significant is found to satisfy the reviewers. The problem is that most don’t even know that they are doing something wrong.
I would expect that most areas using observational and complex experimental data would be the same.
Publish or perish!!! That is what is fuelling scientific misconduct. Unless new benchmarks are established to assess research capacity, decide on career advancement, and allocate research grants, data fabrication, plagiarism and other forms of scientific misconduct will unfortunately continue to prevail.
This is click-bait, not science: a skewed database and questionable extrapolation methods used to assume people really did do things they said they didn’t.
1) It may count as misconduct itself.
2) It may also be considered slander against the whole community.
What I’ve learned from this survey is that if they’re going to assume that my answers to some questions mean I’m lying on others, I will not participate in any more surveys at all.
How reliable are these numbers? And what can they tell us about the actual frequency of research misconduct? It will be argued below that, while surveys asking about colleagues are hard to interpret conclusively, self-reports systematically underestimate the real frequency of scientific misconduct. Therefore, it can be safely concluded that data fabrication and falsification (let alone other questionable research practices) are more prevalent than most previous estimates have suggested.