A record-breaking year for retractions in 2011, a new record for retractions by one person — what’s going on?
Ivan will be a guest for a live chat with Science magazine today at 3 p.m. Eastern to discuss fraud, ethics, and retractions. Join him, University of Virginia psychologist Brian Nosek, and Science reporter Martin Enserink — who most recently reported on the case of Dirk Smeesters.
To participate in the chat, click here.
The overall number of papers is going up so it is expected that retractions would rise as well – I suspect in proportion but have no figures to back this up.
I would also guess that dishonesty has not changed either – this is simply a part of the human condition, deplorable though it is. Retractions are only the tip of the iceberg – there are probably many times more cases of fraud where the matter never comes to light…
Thanks for the comment. Retractions are actually rising much more quickly than the number of papers, as the Wall Street Journal and Nature have reported based on Thomson Scientific data: http://www.retractionwatch.com/2011/10/05/do-editors-like-talking-about-journals-mistakes-nature-takes-on-retractions/ There have been ten times as many retractions in the past decade as in the previous one, compared with just 44% more papers.
What if you compare the number of retractions to the total number of authors who have published papers? Many decades ago, 1-, 2-, or 3-author papers were the norm. Now 5-10+ authors are the norm.
Perhaps it’s connected to the decreases in available funding and the increased career pressure that results. Certainly all of the instances of fraud that I’ve experienced first hand appear to have been career motivated.
When I was working on my PhD the faculty adviser would threaten the foreign students with dismissal if they didn’t get the results he wanted. They were too often faced with the possibility of fabricating results or returning to their home country without a degree and in disgrace.
So, domestic students like yourself were not threatened with dismissal if they didn’t get the results the boss wanted? Was it because they did not need to be coerced to fabricate data as they did voluntarily? Or the boss knew that moral integrity of the domestic students was so rock solid that they would never fabricate data no matter what? Amuse me with your reasoning.
My experience, as a domestic observer, has been that foreign students are more vulnerable to abuse by supervisors. I’ve had my own experiences of abuse by supervisors in academia, so I know that underlings in the system have little power, but foreign students are even worse off than I was. No one believes their side of the story when there is a dispute. Administrators simply refuse to act on their complaints.
While it is true that some foreign students take advantage of the U.S. system in various ways, it is also true that foreign students are vulnerable to being taken advantage of by the system (or by the people in the system). So the advisor may frown at a domestic student and may grumble over the lunchroom table that the domestic student is not working hard enough, but the domestic student will get half an ear from administrators and will probably be able to negotiate a compromise arrangement if the matter escalates to the point of possible dismissal from the program. The foreign student will get no ear and no compromise.
One foreign student who filed a lawsuit against the university after his complaints were ignored was picked up by the INS shortly thereafter. Not hard to guess who requested that the INS agents make a special trip to the hinterland to remove a single troublesome foreigner, when much more productive raids could have been mounted on businesses in the outskirts of the capital city to fill the agency’s quota of arrests for the week.
So I can well believe Lester’s report that an advisor would direct pressure toward foreign students when possible. Domestic students can be pressured, too, but that route is inherently more difficult.
So, given the INS acronym, ‘domestic’ stands for ‘American’ in this thread, right? The rationale you give is plain institutional racism.
Nope, chirality. Not racism. Country-ism or nationalism, possibly, but more likely administrative-ism. As far as I can tell, domestic students do not suffer based on the color of their skin or based on their heritage in cases where their heritage is obvious. Foreign students and foreign employees are vulnerable because they are foreign. It is easier to deport them than to deal with their complaints. Troublesome domestic employees and domestic students can be fired, laid off in orchestrated budget reductions, dismissed from programs, and so forth, if the effort is worth it, but often it is easier to put up with their complaints or to arrange a change of situation and wait them out. They will leave eventually. Troublesome foreigners can be removed quickly.
On this thread “domestic” probably stands for whatever vantage point the writer is using at the moment. Domestic versus foreign is a distinction made by every country. A U.S. citizen currently employed in India would talk about the current domestic economic situation meaning the situation in India. The same person, referring to an earlier time, might talk about the domestic situation when s/he was in elementary school meaning the situation in the U.S.
I do not think this kind of exploitation can be chalked up to nationalism. It has more to do with the PI being an ass. Anyway, the whole system is beginning to make sense. First, the PI coerces some foreigner to fake data and produce a superficially good paper the former needs for tenure, promotion, or funding. If the fraud gets uncovered, the PI can claim that the foreigner is solely responsible for it. His institution, being ‘nationalistic’, will back up his claim. The foreigner, after being conveniently removed from the country by some three-letter agency for the said fraud, will not be around to present his side of the story. I recall a few recent stories covered by RW that seemed to follow this very script. Astonishingly, this appears to be common knowledge among the ‘domestic’ crowd, including PhD students, yet nobody seems to have any problem with this.
I’m afraid the pattern you see affects all graduate students, not just foreign ones. When things go sour, somebody has to take the fall, and it’s much more likely to be the graduate student than the professor.
A recent practitioner of this art of blaming the graduate student is Dr. Dipak Das, who offered as a defense the fact that he hasn’t done lab work in decades and therefore cannot be held responsible for faulty work that comes out of his lab. He supervises it, teaches graduate students how to do it, and puts his name on all of the papers, but he is not responsible for any defects.
This happens in India as well. Let me not disclose further, as they hold big scientific positions.
Rajdeep/Chirality: It happens in all spheres of life and it is fairly universal! The rich/powerful/well-connected almost always get away with minimal consequences. Nothing surprising, Sir!
An upcoming paper in Revue d’Epidémiologie et de Santé publique favors a “surveillance bias”, as it is much easier today to detect plagiarism, self-plagiarism… and as whistleblowers on fabricated data have an easier way to push their claims outside the influence range of the one(s) they denounce.
It also reminds us that this debate keeps going round and round: the Darsee scandal and the Alsabti affair already launched it in the 1980s, though some procedures and institutions have since appeared to address such problems, such as the move from authorship to contributorship.
Revue d’Epidemiologie should practise what it preaches.
http://spore.vbi.vt.edu/dejavu/duplicate/?pmids__ref__Journal__iexact=REV%20EPIDEMIOL%20SANTE%20PUBLIQUE&&type=botharticles
By the way, publishing in another language is not in itself an excuse.
In any event, a French/French example:
http://spore.vbi.vt.edu/dejavu/duplicate/77906/
Rev Epidemiol Sante Publique. 2008 Jul;56 Suppl 3:S231-8 and Bull Cancer. 2006 Jul;93(7):691-7.
English/French:
http://spore.vbi.vt.edu/dejavu/duplicate/37381/
Rev Epidemiol Sante Publique. 2006 Jul;54 Spec No 1:1S53-1S59. and Subst Use Misuse. 2006;41(10-12):1603-21.
http://spore.vbi.vt.edu/dejavu/duplicate/16621/
Rev Epidemiol Sante Publique. 2004 Oct;52(5):455-64. and Biostatistics. 2004 Oct;5(4):531-44.
http://spore.vbi.vt.edu/dejavu/duplicate/27926/
Rev Epidemiol Sante Publique. 2005 Sep;53 Spec No 1:1S79-88. and Med Care. 2003 Mar;41(3):432-41.
http://spore.vbi.vt.edu/dejavu/duplicate/46598/
Rev Epidemiol Sante Publique. 2006 Jul;54 Spec No 1:1S15-1S22. and Epidemiol Infect. 2004 Aug;132(4):699-708.
This is an opinion paper to be published in RESP, not by RESP editors.
Whether a different language makes this problematic is debatable:
1/ Probably less than 1% of public health scholars worldwide read French, so I don’t see a problem in a translation to a wider international audience.
2/ As RESP is also a “professional” journal, it gives its domestic audience a view of the international literature. I can tell you that most non-academic HCPs never read an English-language journal, and it has very unfortunate (to say the least) public health consequences.
Of course, the existence of another-language version should be mentioned, as is the case with translated books. I will try to check out some of your examples to see if that is the case.
The following mention is given on the French/French case :
Repris des auteurs Nora Moumjid et Alain Brémond : Révélation des préférences des patients en matière de décision de traitement en oncologie : un point de vue actuel, Bulletin du cancer, copyright 2006, vol. 93, no 7, pages 691–7, avec l’autorisation des éditions John Libbey Eurotext, Paris.
which means :
Taken from authors… (REFERENCE) with the authorization of John Libbey Eurotext Editions, Paris
Sure, you have to be French-speaking to check it out; the automated duplication finder on PubMed can’t tell 🙂
When you say, “Is science becoming less honest?” (actually it wouldn’t be “science” but rather “scientists”) how far back are you looking? I have the feeling (not scientific) that some of the dishonesty is linked to the importance of obtaining grant funding for the academic institution. If you read well done papers from the 70’s and earlier, I’ve noticed the investigators are much more thorough in assessing alternative explanations to their primary hypothesis (once again, not a random sample).
I would agree, both about the scientists rather than science itself becoming less honest and about one of the reasons for it. There is a huge push to publish even if there is nothing to say — to advance a career (that of the grad student and/or that of the advisor and committee members), to get grant funding, to justify money spent on a project, to achieve the quota set by the department, whatever. I, too, notice that in the past there seemed to be many more thorough projects while today there seem to be small, narrowly defined projects. I say “seem” because I am aware of several projects that were broadly conceived, in the sense that one throws the entire pot of spaghetti at the wall, but were narrowly reported as if the published results were the entire focus of the original design. Instead of multiple lines of inquiry being pursued to see if they all lead to the same conclusion, multiple lines of inquiry are pursued in hopes that at least one of them will result in a publishable finding. Although “abuses” seems like a stronger word than I want — perhaps “fudging and glossing over and careful wording” are better choices — there is plenty of this going on. I suspect there always has been some, and the current trend toward financing entire universities with grant money simply increases the pressure to pretend that valid results have been produced when the only true output is noise.
You said it nicely, JudyH.
20 years ago, if you found an image in a printed journal and you suspected it was dodgy, you couldn’t really do much about it because the printed version was all you had.
10 years ago, when journals started back-cataloging their print pages with poor resolution PDFs, the situation was no better.
Today, you can often access not just a 300dpi image in the PDF, but click-through to the original 1200dpi image submitted by the authors. Paste into potatoshop and use ORI’s droplets, and you have all the evidence you need.
Thus, I’d say the biggest reason for more retractions is that it’s just easier to spot fraud these days. Apparently this news has not yet sunk in for a minority of scientists. I would add that the amount of scientific fraud is probably still under-reported, the main reason being the failure of editors to acknowledge their responsibilities in responding to anonymous whistleblowers. I’m currently 0 for 5 on getting a series of papers retracted from 2 journals, and by “0 for 5” I mean zero responses to my 5 emails.
If you are using photoshop to falsify science you are doing it wrong.
Not to say what vhedwig does here isn’t useful, but this is only catching the absolute imbeciles.
Yet the absolute imbeciles seem to account for most of the fraud. Those include plagiarists and western-blot fraudsters, some of the most common modes of fraud in the biomedical sciences. I must say that not only are those quite common in Brazil, my country, but I am certain scientific misconduct here is on the rise. And fraudsters manage to hold the highest positions, and it seems to be the case in India as well, judging from one comment above. Thus, there is strong internal pressure not only to practice fraud, but also to let it go unnoticed. I must always use the example of Leonardo Gomes, who had several papers and a whole book retracted, was never punished here, and was recently elected the youngest principal in his university.
It is not only science that is becoming less honest nowadays; the whole society is becoming less honest… no rewards or recognition for honesty/humility/humbleness…
I’m double face-palming right now at the title of this story.
The irony of it does tickle one’s sense of humor.
It would be good to get a breakdown by country (scientific fraud/total scientific output for each country). My bet is that the West is doing pretty well, and the spike is due to China, India, and similar countries without a tradition of scientific integrity and checks.
Fujii (Japan) at 172, Boldt (Germany) at 90 or so, Stapel (The Netherlands) at 20+.
Which Chinese or Indian scientist is competing with these guys?
“scientific fraud/total scientific output for each country”. Man, I wrote 4 lines and you could not even read those…
I think you will find that Fujii, Boldt, and Stapel’s retractions will cause a significant spike in retractions; and also in “scientific fraud/total scientific output for each country” for their respective countries…
Jon – I asked Thomson Reuters for a breakdown by country last year. See results at http://blogs.nature.com/news/2011/10/the_reasons_for_retraction.html . Yes, there are spikes (retracted articles/total output) in China, India and Iran.
Thanks.
From the table in the blog linked to by Richard Van Noorden, retractions as a percentage of papers originating in each country:
2001 – 2005: Egypt 0.040%, Norway 0.029%, South Korea 0.020%, India 0.017%, … PRChina 0.011%
2006 – 2010: China 0.047%, Egypt 0.045%, India 0.037%, Iran 0.026%
Some numbers swing wildly from period to period (e.g., China is low one period and high the next, Norway is high in one period and does not register at all in the next). Perhaps a moving three-year average would be more consistent. From this set of data, Egypt seems to be a consistent violator, although that country merited no obvious spike in posts on Retraction Watch.
Certainly a serial violator who gets caught after a fruitful career (like Fujii and Boldt) will skew the numbers. Should these be removed from the calculation as anomalies? Or are they indications that their countrymen are doing likewise if only the country’s literature were investigated enough to uncover all the instances?
Of course discovery will be affected by violations in high-profile versus low-profile journals, in disciplines that attract a lot of attention, and so forth, so it is difficult to compare apples to apples and to come up with a comprehensive picture.
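The moving-average idea a few comments up can be sketched in a few lines of Python. The yearly counts below are made-up placeholders purely for illustration, not real Thomson Reuters figures; the function names are my own.

```python
def retraction_rate(retractions, papers):
    """Retractions as a percentage of papers, year by year."""
    return [100.0 * r / p for r, p in zip(retractions, papers)]

def moving_average(series, window=3):
    """Simple trailing moving average over `window` years."""
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

# Hypothetical example: 8 years of counts for one country.
retractions = [2, 9, 3, 4, 12, 5, 6, 14]
papers = [20000, 21000, 22000, 23000, 24000, 25000, 26000, 27000]

rates = retraction_rate(retractions, papers)
smoothed = moving_average(rates, window=3)
# `smoothed` damps single-year spikes (e.g. one prolific fraudster
# being caught all at once), which could make country-to-country
# trends somewhat easier to compare across periods.
```

Whether three years is the right window is, of course, exactly the kind of judgment call being debated here.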
Some further points on this, JudyH. The overall numbers of retractions are so low that it is hard to know if there is any statistical significance in any of these percentage shifts. You mention Norway – well we’re looking at the difference between 9 retracted articles in one 5-year block, and fewer than 7 in the next! A rolling average over three years, meaning lower numbers still, would make it even harder to spot statistical significance, I think. See e.g. blog by Bob O’Hara for his efforts to look at statistical significance in retraction rates. http://blogs.nature.com/boboh/2010/11/17/rates-of-scientific-fraud
Also, in case people hadn’t noticed, this data refers to the publication dates of papers that are subsequently retracted, NOT the dates of retraction notices that relate to those papers. (For example, the Fujii discoveries will lead to a spike in retracted papers from Japan going all the way back to 1993). The reason for this is simple: retraction notices don’t come with country affiliations in the Web of Science database, whereas retracted papers do. So it’s easier to count up the latter than the former.
Finally – yes, it’s basically impossible to compare apples to apples when looking at the breakdown of retractions between different countries.
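The significance point above can be made concrete with a standard exact test (my choice of method, not anything from the Nature data): conditional on the total number of retractions across two equal-output periods, under equal underlying rates the first-period count follows a Binomial(n, 0.5) distribution.

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Two-sided exact binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed count k out of n."""
    pmf = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

# The Norway-style example from the comment: 9 retractions in one
# 5-year block vs. 7 in the next (assuming equal paper output).
p_value = binomial_two_sided_p(9, 9 + 7)
# p_value ≈ 0.80 — nowhere near significance, so the apparent "drop"
# tells us essentially nothing about a change in the underlying rate.
```

With counts this small, even a doubling from one period to the next can easily be noise, which supports the caution about reading trends into these percentages.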
So really there are two things of interest here: 1) papers published in year xxxx that are later retracted, suggesting a continuing rate of misconduct, and 2) rate of retractions in year xxxx, suggesting more (or less) attention to misconduct in certain decades.
More attention can spread to cause retractions of papers published over many previous years in the case of serial violators. Multiple footnotes would be needed to flag, for example, the occasional bad actor who inflates the numbers for a country. So maybe we would need to work into the equation the number of authors who are found responsible for whatever papers are retracted. The spikes due to Boldt and Fujii would then be attributed to just one scientist in a country, not to the country as a whole.
To complicate things, the desire to pin misconduct on certain countries or cultures would support classification of offending scientists by country of origin as well as country where the violator was working when the violations occurred. Naturally, the resulting data would be only correlation, not causation, since countries with high rates of misconduct could be attracting violators rather than growing violators at home. I think a finding would be supportable only if both emigrants from Country A and immigrants to Country A have high rates of misconduct. This might suggest that the ethical climate in Country A tolerates misconduct, so that both people raised there and people who move there (perhaps intending to take advantage of the ethical climate) play by a different set of rules.
So I agree with you that the country-to-country comparison is darned near impossible to make.
Let me tell you about another kind of scientific misconduct which happens in India, though I am sure it happens in many countries/groups as well. There is always a handful of people who are close to the PI, and these people get authorship on many papers to which they haven't contributed anything. Some of these papers are not even related to their work. Most of these papers are written by the PhD students, who are afraid of raising ethical questions. I think these PIs should read the book "On Being a Scientist: A Guide to Responsible Conduct in Research," published by the National Academy of Sciences, National Academy Press, Washington, D.C. That's what my PI gave me when I came from India to the US to do my postdoc. I don't know why; maybe a coincidence, or something else…
Also common in Brazil, lost Indian! Don't feel lonely in this boat!
Problem is, in our countries you don't need to be powerful and well connected to get away with fraud; you just need to do it and keep a straight face.
Of course science is becoming less honest, as regards publications, grant applications, and reports. This is because of the enormous pressure that everyone in science experiences to publish, publish, publish! A researcher, and actually anyone, a doctor, a university lecturer, is judged ONLY by the number and rank of publications. Some key people report in their CVs having 800, 900+ publications (how on earth could they have produced them!).
Any young researcher coming to a lab – what is the main lesson his/her supervisor is teaching? To do the best science? To create an ingenious experiment? To do all the things right? No, the first and foremost (and in many cases the only) instruction and requirement from the boss is: when will you publish, what material do you already have to publish, how many papers can we make out of your data, when will we submit? Submit, submit, submit! Publish, publish, publish!
On the other hand, people need publications to have grants (i.e. bread for their families), to be promoted, to renew or gain positions. So, they do not have a choice, even those who are not happy with this publication rat race.
So, is there any hope? No, things will only get worse and worse for any foreseeable future.
Well, the only positive thing is that eventually Retraction Watch will have an IPO bigger than Facebook.
A book review by Akita Kawahara in the July 8 edition of Science says it all, or at least much of it.
The book is “A Guide to Academia” by Prosanta Chakrabarty, and the advice apparently tends always toward publication. Below are some suggestions, not quite word-for-word from the review, with some explanatory additions.
Graduate students should focus on activities that lead to publications.
The (graduate) project’s outcome should be publishable. Potential results should be considered.
A new faculty member should work to publish all as-yet-uncompleted papers and should hire post-docs to produce new findings.
The new faculty member should publish results and write grant proposals, not spend time teaching grad students in the lab. Post-docs can teach the grad students.
So it’s all publish, publish, publish, as Publish or Vanish writes. Should a grad student take on an intriguing problem that might result in “failure”? No, pick your project by the likelihood of getting publishable results. If the outcome is not known in advance, the project is too risky. Should a new professor teach? Bah, humbug. Do research? No time for that, need to write up results and submit a manuscript.
I have often reflected that it is silly for the education system to spend so many years teaching people how to be scientists and then turn them into administrators. Most professors spend more time managing their budgets than doing science. The post-docs and the grad students get to do the science, if they are inclined that way. Teaching at the university carries the lowest prestige and the lowest pay. (My most recent teaching stint had an effective pay rate of $11 per hour, after I considered the hours spent preparing lectures, preparing homework, grading tests, and answering e-mails. This is laughable.) Bringing in big grant money carries the highest prestige, but doing the actual science is not part of that. It’s the big dollars that count, not the science.
Loved the phrase “How many papers can we make out of your data?”, Publish or Vanish. Yup, that’s the most important thing. Not “How convincing could this result be if we wrap up everything we know in one manuscript?”, but “How many papers could we get by carving up the body of work into the smallest publishable units and sending the pieces to various journals?” Like sending a foot to one politician and a hand to another politician. 🙂
Another thought on comparing “now” and “before”:
A hundred years ago, or 50 years ago, what was the purpose of publishing a paper? – I think, to announce your findings and ideas to the scientific world.
What is the purpose of publishing now? – To increase your publication count, to improve your CV, to enhance your employability and grant-ability.
I think this explains the current downtrend in honesty.
The only reason why people are trying to make their paper better is to be able to publish it in a better, higher impact factor journal. This is now. But previously, have you noticed that a number of seminal papers that led to Nobel Prizes were actually published in quite insignificant, local journals?
I think the past can be idealized. There has always been some element of ego, vanity, reputation, in science publications. And misconduct. “The Academic Marketplace” by Caplow and McGee is a funny review of hiring and promotion in academia that includes some insightful quotes about the value of publications. Numbers, please. Nobody is going to read the papers, only count them. And if you notice, Jim in “Lucky Jim” (by Kingsley Amis) finds that the manuscript he submitted to an obscure journal has been stolen by the editor of said journal and published under the editor’s name. These things were known in the past to occur.
But I agree with you that in the past people were more interested in getting their stuff into a journal where people with similar interests were likely to see it. The impact of the journal was measured in some ways by who read it. Maybe it is the proliferation of journals that makes us unable to form our own opinion of impact, so that we need somebody else to put a number on it for us. It is good that there are more outlets these days, so the volume of papers is essentially unlimited. Unfortunately, unlimited volume seems to be encouraging poor quality rather than opening up opportunities for more good papers to see the light of day.
Maybe one solution is to move back to journals that make no profit. The only people these days who make no profit are the unpaid reviewers, and that may be one reason for inadequately reviewed papers. If everybody involved is doing it as a labor of love, maybe there will be more quality and less quantity.
Also Brazilian, dear friend Publish or Vanish?
Regarding idealizing the past, I completely agree. Human nature hasn’t changed. Advances in technology have simply increased the opportunities not only to cheat, but to detect cheating, and to revel in that detection (sanctimony also being an enduring characteristic of human nature). I recall this article,”Peas on Earth,” first published anonymously in a professional journal, which deliciously takes the rise out of a giant of science:
“In the beginning there was Mendel, thinking his lonely thoughts alone. And he said: ‘Let there be peas,’ and there were peas and that was good. And he put the peas in the garden saying unto them ‘Increase and multiply, segregate and assort yourself independently,’ and they did and it was good. And now it came to pass that when Mendel gathered up his peas, he divided them into round and wrinkled, and called the round ‘dominant’ and the wrinkled ‘recessive,’ and it was good. But now Mendel saw that there were 450 round peas and 102 wrinkled ones; this was not good. For the law stateth that there should be only 3 round for every wrinkled. And Mendel said unto himself ‘Gott in Himmel, an enemy has done this, he has sown bad peas in my garden under the cover of night.’ And Mendel smote the table in righteous wrath, saying ‘Depart from me, you cursed and evil peas, into the outer darkness where you shalt be devoured by rats and mice,’ and lo it was done and there remained 300 round peas and 100 wrinkled peas, and it was good. It was very, very good. And Mendel published.”