In 2009, a now highly cited study found that, on average, around 2% of scientists admit to having falsified, fabricated, or modified data at least once in their career.
Fifteen years on, a new analysis tried to quantify how much science is fake – but the real number may remain elusive, some observers said.
The analysis, published before peer review on the Open Science Framework on September 24, found one in seven scientific papers may be at least partly fake. The author, James Heathers, a long-standing scientific sleuth, arrived at that figure by averaging data from 12 existing studies — collectively containing a sample of around 75,000 studies — that estimate the volume of problematic scientific output.
“I have been reading for years and still continue to read this 2% figure which is ubiquitous,” Heathers, an affiliated researcher in psychology at Linnaeus University in Växjö, Sweden, said. “The only minor problem with it is that it’s 20 years out of date,” he added, noting that the last dataset that went into the 2009 study was from 2005.
So Heathers tried to come up with a more up-to-date estimate of scholarly literature containing signs of irregularities. “A lot has changed in 20 years,” he said. “It’s been a persistent irritant to me for a period of years now to see this figure cited over and over and over again.”
Past studies predominantly focused on asking researchers directly whether they had engaged in dishonest research practices, Heathers said, “which I think is a very bad approach to being able to do this.” But he noted it was probably the only method available when that research was conducted.
“I think it’s pretty naive to ask people who are faking research whether or not they’ll honestly answer the question that they were dishonest previously,” Heathers said.
Heathers’ study pulls data from 12 different analyses from the social sciences, medicine, biology, and other fields of research. All those studies have one thing in common: The authors of each used various online tools to estimate the amount of fakery taking place in a set of papers.
“There’s a really persistent commonality to them,” Heathers said. “The rough approximation for where we end up is that one in seven research papers are fake.”
Heathers said he decided to conduct his study as a meta-analysis because his figures are “far flung.”
“They are a little bit from everywhere; it’s wildly nonsystematic as a piece of work,” he said.
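To make the pooling Heathers describes concrete, the sketch below averages per-study estimates of the fraction of problematic papers, both unweighted and weighted by each study’s sample size. The numbers are invented for illustration; they are not the values from the 12 studies Heathers actually pooled, and this is not his code.

```python
# Sketch only: hypothetical (estimated fraction flagged, papers examined)
# pairs. These values are invented and are NOT Heathers' actual data.
studies = [
    (0.12, 5_000),
    (0.16, 12_000),
    (0.14, 8_000),
    (0.15, 20_000),
]

# Unweighted mean of the per-study fractions
simple_mean = sum(frac for frac, _ in studies) / len(studies)

# Sample-size-weighted mean: larger screens count for more
total_n = sum(n for _, n in studies)
weighted_mean = sum(frac * n for frac, n in studies) / total_n

print(f"unweighted: {simple_mean:.3f}, weighted: {weighted_mean:.3f}")
# Both land around 0.14-0.15, i.e. roughly one paper in seven
```

Whether to weight by sample size is exactly the kind of methodological choice critics of a “wildly nonsystematic” pooling would scrutinize, since a single large screening study can dominate a weighted estimate.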
Daniele Fanelli, a metascientist at Heriot-Watt University in Edinburgh, Scotland, who authored the 2009 study, is not convinced by the new analysis. “Metascience research is sometimes not metascientific,” he said, arguing that the study wrongly labels papers with some problem as definitely fake and incorrectly lumps together different studies measuring different phenomena.
“The papers are all different, they’re all over the place in highlighting all sorts of different problems in all sorts of different contexts using all sorts of different methods,” Fanelli said. “That’s not a rigorous way to get an estimate of anything.”
Fanelli said the study will draw unnecessary negative media attention: “It’s not the kind of attention that science either deserves or will benefit from.”
“I don’t think it’s entirely wrong, but I think that it can be slightly misleading,” said Gowri Gopalakrishna, an epidemiologist at Maastricht University in the Netherlands who co-authored a 2021 study in which 8% of researchers, in a survey of nearly 7,000 scientists in the Netherlands, confessed to falsifying or fabricating data at least once between 2017 and 2020.
Gopalakrishna said fabrication and falsification may be more prevalent in some fields than others, so grouping them together may not be helpful. “If you want to get the attention of the government and try to shake things up, putting them all together and saying ‘look how big the problem is’ is probably useful in that way, but I really do think that it’s important to drill down,” she said.
Heathers acknowledged those limitations but argued that he had to conduct the analysis with the data that exist. “If we waited for the resources necessary to be able to do really big systematic treatments of a problem like this within a specific area, I think we’d be waiting far too long,” he said. “This is crucially underfunded.”
Heathers said he decided to come up with a figure for the average percentage of fakery in science because few such estimates are available. “Even if you do something that’s an incredibly systematic review in a very formal sense, I strongly suspect you’ll get the same estimate that I’ve got,” he said.
1 in 7? more like 1 in 3.
This discourse has so many underspecified core terms that it is practically meaningless, and rhetorically dangerous. What counts as a “scientific paper”? Is that determined by the status of the authors, the journal, the topic? *What* is fake: the data, the authors, the citations? And what does it mean for something to be “fake”? Is it in whole or in part incorrect, fabricated, misrepresented, unsupported?
The only thing that comes out of this kind of blanket declaration is fuel for scientific skepticism, and do we really need more of that?
What if, instead of saying that “1 in 7 scientific articles are fake”, we talked about defining the requirements a text has to meet to be considered a scientific article in the first place? It’ll make for a less eye-catching declaration, but it’s eye-catching declarations that end up being latched onto, distorted, and spread.
I generally agree with your sentiment but disagree that “scientific skepticism” is a bad thing. It is, in fact, the very basis of the scientific method. It should be encouraged, and sound science should pass muster in the face of said skepticism.
There is a difference between healthy scientific skepticism within the scientific community that helps improve science and the kind of skepticism that allows the wholesale rejection of science by society at large.
It is dangerous to make a blanket statement like this because it will be picked up by bad actors, especially far-right anti-intellectual types who will use it to undermine trust in established science (e.g., vaccines, climate science, etc.).
The public don’t read scientific papers. They read or watch journalists and influencers. Half the journalists never read the actual scientific article associated with a claim; they only read and/or report the press release. But press releases are written by publicists, not scientists. Publicists employed by the scientist’s employer, who think their job is to bring fame to their employers by hyping and exaggerating.
If scientists are happy with such publicists – they’re partly responsible for the poor standing of science today.
My academic background is environmental chemistry. Fraud in this research field is very hard to detect. For example, if a study is about contaminant degradation by microbes, the research can take months, even a year. Nobody would be interested in repeating it, unless someone found something too big to ignore. And if the general contaminant degradation kinetics can be reproduced, nobody would really call it fraud, even if the researchers made up some of the data.
The problem leading to this situation (regardless of whether it’s 1 in 7 or 1 in 3 or 1 in 30 papers) is the incentivization of productivity and grant funding at the expense of an emphasis on quality and, eventually, truthfulness and accuracy. For those of us who are in science for the thrill of finding out about stuff and really learning something new, it seems strange and then downright annoying that somebody may not really be interested in that but more in career-building through high productivity and/or cutting some corners (and often the two seem to go together). Granted, you can’t do science without taking the career thing seriously. Tenure is a key prerequisite for being able to do more science. But it should be just the means to the end of following your curiosity. All too often the currently prevailing incentives reward the means (not just tenure, but more productivity, more grants) while ignoring the end. I know that I probably sound hopelessly naive and idealistic. But for me that’s what it boils down to in the end.
This is why I always think doing science as a job is weird. Too many people are doing science for a job; I’d assume there are way more scientists than actual things to discover. When scientists are penalized for not publishing meaningful/positive/eye-catching research findings, they will do whatever they can to force a “finding” into existence, either through p-hacking or outright fabrication.
In the “conflict of interest” section of every journal publication, it should always contain the statement “the scientist(s) who publish this study rely on the amazing finding described in this paper to secure funding, career, and dinner for their family.”
Science is not impartial at all, rather, it’s done by a whole world of desperate individuals who are seriously biased to publish work. Either science should be reserved for people who are not worrying about their livelihood, or the system should undergo a full revamp to ensure that scientists won’t perish if they don’t publish.
Perfect comment, completely agree. In the 1800s a lot of science was done by the independently wealthy; think English country gentlemen like William Wollaston, who discovered the element palladium.
I think what should be done now is suggest to independently wealthy people they set up research institutes that emphasize honesty and reproducibility. Then, greatly cut back on federal funds for research to academia, where most of the careerism in science occurs.
Respectfully, this is not the 1800s, and most important scientific work is being done by people who aren’t wealthy. We should be increasing the consequences for bad actors, not decreasing the funding of science.
Ah, yes, because rich people never have their own agendas which would be furthered easily by having the scientific community under their thumb.
Somehow, I think your suggestion would result in a drastically worse landscape – both the scientific landscape and our literal planetary landscape.
Quite frankly, it’s hard to imagine a worse landscape compared to what we are seeing now, so why not try something new? Academic research has become largely untrustworthy; see, for example:
https://www.sciencenews.org/article/cancer-biology-studies-research-replication-reproducibility
Recently, lots and lots of fraud is being exposed in Alzheimer’s research:
https://www.science.org/content/article/research-misconduct-finding-neuroscientist-eliezer-masliah-papers-under-suspicion
I worked in academia most of my career and most of the money is wasted, IMO, by 1) irreproducible results created by desperate, underpaid grad students and postdocs and 2) overpaid (over six figures a year) deadwood faculty who won’t retire.
This system needs to end.
Why would independently wealthy people set up honest research institutes, when their pet agendas (trans-activism, climate doom, …) are better served by corrupting science? Many of them seem to think that changing what scientists have to say about something ‘alters reality’. In the USA, much money from such wealthy people goes into trust funds. Such trust funds pay massive amounts to pseudo-news outlets, think tanks, NGOs, charities, activists, and political influencers to lobby government and promote their interests.
I have particular concerns regarding studies labeled as surveys or reviews of databases which may or may not be accessible for review. Surveys are subject to systematic bias and are, if science at all, the lowest form of scientific inquiry. My hope is that their quality, or lack thereof, will be assessed through an independent systematic review process, and that no survey result will be cited as scientific unless it has been reviewed and graded.
Research on and the discussion of the frequency or prevalence of misconduct in research is essential. Without some understanding of the extent of the problems to be addressed, it is impossible to develop appropriate policies and procedures to deal with them.
Heathers is not wrong when he complains that current estimates are endlessly repeated without any understanding of their limitations or need for updates. Fanelli is also correct in paying careful attention to methodology before taking estimates too seriously. Research on research misconduct should be conducted as rigorously as any other research. Too often, it is not as rigorous as other research because it is not properly funded.
Twenty-four years ago, I suggested to ORI that, based on the few studies that had been done, the rate of misconduct in research was about 1%. That we do not have better estimates today reflects a serious shortcoming of current efforts to promote integrity in research. It is understandable that research funders are more interested in good news than bad. But if the potentially bad news is not recognized and addressed, research institutions may have no funding to protect.
Several dissertation conclusions in economics, statistics, business administration, and related fields, arrived at based on surveys, are not dependable. The data collected are inadequate and hollow, based on statistically untenable parameters. Such conclusions are transient and have no lasting value for application in actual fields.