10 years ago, Elisabeth Bik published a preprint heard around the world

Elisabeth Bik

If you are at all familiar with scientific sleuthing, you’re familiar with Elisabeth Bik. She is quoted so often in the mainstream media that it is probably difficult to imagine a time when her supersense for spotting similarities in images wasn’t making headlines.

But it was 10 years ago, on April 19, 2016, that she made her debut, when we covered her work screening more than 20,000 biomedical research papers containing western blots. She and coauthors Ferric Fang – a member of the board of directors of our parent nonprofit organization, The Center for Scientific Integrity, and a professor at the University of Washington in Seattle – and Arturo Casadevall, of the Johns Hopkins School of Medicine in Baltimore, posted the work as a preprint on bioRxiv.org, and it appeared two months later in mBio.

The preprint was a shot across the bow for journals and publishers, and in the decade since, Bik has advised and mentored others doing similar work. In 2024, she won the Einstein Foundation Award for “identifying misconduct and potential fraud in scientific publications, highlighting science’s problems policing itself.” She donated the proceeds to The Center for Scientific Integrity to create a fund to help other sleuths do their work.

Bik spoke with us earlier this month about the paper, sleuthing and more. The conversation has been edited for clarity and brevity.

Retraction Watch: We first interviewed you in 2016 just as you and your coauthors posted your review of western blots in 20,621 papers. Ten years later, do you know what has happened with those papers?

Elisabeth Bik: Well, not all 20,000, but the 800 or so papers that I found problems in, yes. Of the 782, 177 have been retracted, 42 have an expression of concern and 256 have been corrected. And if I count all three of them, that’s 475, so 60%. 

RW: Do you think that should be 100%? 

Bik: Yeah, I would have loved it to be a little bit closer to 100%. You can see papers are still being corrected. Like this paper, for example, was retracted in 2024, but I reported it in 2015. [On Zoom, Bik was pointing at the spreadsheet she uses to track papers.] Most of these were reported to the journals in 2015.

RW: What did people think of your paper? 

Bik: It was rejected four or five times. In the end, we were like, we’ll just put it as a preprint and do an interview with Retraction Watch. 

Nobody believed this paper. People didn’t believe I scanned 20,000 papers over a period of roughly a year or two. I did a count of how much time I used to scan one paper, and it was about one minute per paper. Really, I’m not reading a paper, I’m just looking at the images. We took that detail out in the end because so many people said, that’s impossible. I’m proud of it, but that’s apparently the point that breaks everybody.

People also wanted to know, ‘What are your false-positive and false-negative rates?’ We weren’t quite sure. There’s no real gold standard for it. What is the standard for image duplication? I was the first to raise this, so it’s hard to test it against another method. And I also don’t know how many papers I missed. I think we were more worried about claiming a positive where there wasn’t one. That’s why my two coauthors were incredibly helpful. But I know I must have missed a lot of these problems.

RW: But 782 out of 20,000 is not nothing.

Bik: Yeah, it’s 4%, or 1 in 25.

RW: You’re known for finding duplications and manipulations in images, but you started out scrutinizing papers for plagiarism. 

Bik: That is how it all started. I found that somebody had plagiarized my work. And I worked on plagiarism for nine months or so. And then I came across a Ph.D. thesis that had not only plagiarized text in the introduction, but also a duplicated image that my eye was drawn to. And that evening, I was thinking, wait, that happens? Maybe I should open a couple of PLOS One papers. And I found a couple already that evening. Otherwise, I would not have been talking to you today. Looking back, it’s one of those little moments that change your career.

RW: You had a recent correction to a paper you coauthored. 

Bik: All my papers have been criticized, scrutinized. In a way, it’s fair. I criticize others, people can criticize me. In that paper there was a splicing where we left out a group, and you could see a remnant of a line. It wasn’t like we were trying to change the results or anything. But we corrected it. We found a lot of the original data and we worked with the journal to correct it. 

All my papers have been torn apart for the weirdest reasons. You have to put so much work into addressing these things. In a way, it’s fair to be criticized, but I do feel sorry for my coauthors who are dragged into these long discussions. 

RW: Do you still scan papers by eye or are you mostly using software? 

Bik: Both. Sometimes I see the problem right away, and then I run it through Imagetwin and Proofig. Duplications between papers, especially, are something I’m not good at, because I cannot remember a million other papers, but the software can. Now you scan these papers and it finds, look, that blot has been used in that other paper, but it’s flipped and it’s representing a different protein. And so it’s the same photo, it’s just flipped and resized a bit. It’s very clear once you compare it, but I would never be able to remember all these blots and all these papers and see these patterns. So we’re finding more of these problems with these software tools that have these libraries of images.

RW: You, and many others – including Retraction Watch – have been accused of targeted attacks in post-publication peer review on social media. What effect does that have on your work?

Bik: It worries me a bit, especially when they tag my family. I’m always a bit worried about personal safety. Sometimes the critics will send emails to the host of an event I’m speaking at and say that I’m fraudulent. You have to say to the organizers, I’m very sorry you’re bothered by my enemies. And then, there’s talk about it. What should we do? Should we respond? Should we not respond? Emails have to be sent to all these dozens of people to not respond. It’s just a lot of work for everybody involved. And I feel so sorry that comes on top of organizing a conference, which already is a lot of work. On the other hand, I think it’s good that they see my work does result in personal criticism.

RW: Sleuths have become an essential part of the whole research integrity ecosystem. How has that changed in the last 10 years? 

Bik: I think it’s wonderful to have this growing community because this work, at least the way I do it, is something I do very much by myself, which I like. I’m a super-introvert. I don’t really work well with other people. I like to be loosely connected to a community. We’re all sort of a bunch of misfits. I love to be independent. Then there are other communities who are meta scientists. And people working at publishers doing this work are also wonderful people. And I think all the noses are sort of starting to point in the same direction, which is lovely. It’s becoming part of what science should be. But you have to start in a way that upsets a lot of people and makes people uncomfortable.

There’s still a lot of room to grow. I think we all agree on that. If you buy a car and the airbag is not good, there should be a recall, right? It should be better. Moving forward, all the cars should have better airbags or better wheels that don’t fall off. If we buy a product, we should be able to complain about it. There should be quality control and there should be customer service. And I think that was a bit lacking in the scientific publishing world. And both of these things are getting better. We are growing towards each other and learning from each other.

RW: One of the criticisms we’re seeing as a result of some of the big misconduct cases is the belief that they mean we can’t trust science. What do you say to that? 

Bik: I end most of my talks with this exact point. I’m talking about that one rotten apple in the fruit basket. I love science and I do this to make science better. Maybe I’m considered a vigilante because I point out the bad stuff, but it doesn’t mean that we cannot trust science. We should just do a little bit better in screening before we publish things. We should be critical. And I feel we can all agree on that. 

But it has been used, weaponized, in the misinformation era where people say, all science is fraudulent, that you cannot trust any science paper. I think that is the wrong attitude, but it’s the double-edged sword we’re working with. 

It’s very easy to draw that conclusion, but that is the wrong conclusion. We need science.




Weekend reads: An alternative to the impact factor in China; the clinical trials of six ‘superretractors’; Retraction Watch goes to Capitol Hill

If your week flew by — we know ours did — catch up here with what you might have missed.

The week at Retraction Watch featured:

In case you missed the news, the Hijacked Journal Checker now has more than 400 entries. The Retraction Watch Database has over 64,000 retractions. Our list of COVID-19 retractions is up to 650, and our mass resignations list has more than 50 entries. We keep tabs on all this and more. If you value this work, please consider showing your support with a tax-deductible donation. Every dollar counts.

Here’s what was happening elsewhere (some of these items may be paywalled, have metered access, or require free registration to read):


45 editors resign from math journal, former EIC calls Elsevier publisher a ‘mini-dictator’

Forty-five of 48 members of the editorial board of the Journal of Approximation Theory resigned earlier this month for what they called Elsevier’s “concerning and potentially detrimental” decisions regarding the publication. 

Paul Nevai, formerly a professor at The Ohio State University, was appointed editor-in-chief of JAT in 1990 and held the position for 35 years until December. That’s when he reached the end of his term and Elsevier informed him they’d be filling the position with someone else. 

The mass resignation came after what Nevai said were several years of bad blood between the editors of the journal (including him) and the publisher, Giampiero Accardo. A representative for Elsevier told us designated publishers like Accardo are Elsevier employees who “oversee a portfolio of academic journals within a subject area, working closely with editors, authors, and research communities to support their development and long-term success.”


Retraction Watch testifies in Congressional hearing on scientific publishing

Retraction Watch managing editor Kate Travis (center) testified April 15 in a hearing before the Investigations and Oversight Subcommittee of the House Science, Space and Technology Committee. Other witnesses were Carl Maxwell (left) of the Association of American Publishers and Jason Owen-Smith (right) of the University of Michigan.

A hearing on Capitol Hill today explored issues in scientific publishing — and Retraction Watch had a seat at the table. 

The Investigations and Oversight Subcommittee of the U.S. House Committee on Science, Space and Technology called the hearing to talk about open access, reproducibility, predatory journals, paper mills and the incentive structure in science. The wide remit meant the committee and witnesses touched on quite a few topics in 90 minutes.

Our testimony, delivered by managing editor Kate Travis, focused on the pitfalls of “publish or perish” and how an overreliance on metrics has incentivized shortcuts in research and publishing. “‘Publish or perish’ is what has allowed businesses like paper mills and predatory journals to flourish, and more recently is leading to an explosion of AI-generated papers flooding journals,” Travis told the subcommittee.


“Game-changer” breast cancer study retracted as Indiana researcher out of his post

A group of cancer researchers whose work has been questioned by sleuths has been hit with its third retraction in less than a year.

Today, Science Translational Medicine (STM) withdrew a 2021 breast cancer study by former Indiana University researcher Yujing Li and 12 other authors for image falsification. The immunotherapy study had been described by senior author Xiongbin Lu as a “game-changer” for triple negative breast cancer in a 2021 IU press release.

The paper’s April 15 retraction notice states that a joint research misconduct investigation involving Indiana University, The Ohio State University, and the University of Maryland, College Park determined “falsification occurred during creation of figure S9C.” The institutions alerted the American Association for the Advancement of Science of the misconduct late last year and requested the paper’s retraction, according to Meagan Phelan, a spokesperson for AAAS, which publishes STM.


BMJ retracts most of a special issue for ‘compromised’ peer review and ‘improbable device use’

BMJ’s Journal of Medical Genetics has retracted the bulk of a seven-year-old special issue for an “irreparably compromised” review process and “improbable device use.” 

Of the eight papers in the 2019 special issue, seven were retracted, including an editorial that “almost exclusively” referred to the other now-retracted papers, according to a statement from the journal. 

According to the retraction notice published today, the journal’s investigation found the guest editor for the issue selected the peer reviewers, the majority of whom were affiliated with Nanjing University in China. The guest editor is not named in the issue. The publisher’s investigation also found evidence of compromised peer review in almost all articles, the notice states.


Scientist who alleged COVID cover-up circulated a faked NIH email, agency says

Ariel Fernández

A scientist charged with research misconduct used a faked email bearing an NIH researcher’s address to support his claims of governmental retaliation, Retraction Watch has learned.

Last month, we reported that a U.S. Department of Health and Human Services appeals judge upheld a proposed 15-year debarment against Argentine chemist Ariel Fernández for falsifying research while a professor at Rice University in Houston. Administrative law judge Margaret G. Brakebusch based the May 2025 decision on findings Rice sent to the Office of Research Integrity in 2010 and conclusions from ORI’s independent review completed in 2022.

Fernández denied the misconduct allegations and told us the findings were retaliation by the government for a 2021 paper he wrote supporting a lab origin of SARS-CoV-2. As evidence of the contention, Fernández showed us an email purportedly from National Institutes of Health researcher Joshua Cherry dated June 2021. The email, which appeared to be from Cherry’s NIH address, threatened to resurrect Fernández’s ORI case if he didn’t remove the paper. We could not independently verify the email’s authenticity at the time.  


Weekend reads: LLMs ‘are not the problem’; Cash for peer review ‘doesn’t work,’ project finds; ‘Many Flaws, Few Retractions’ in vaping literature


Here’s what was happening elsewhere (some of these items may be paywalled, have metered access or require free registration to read):


Canadian panel seeks to add more teeth to research oversight

Public comment is invited through April 17, 2026.

A Canadian panel is proposing several changes to its guidelines for responsible conduct of research, including a provision that effectively removes any statute of limitations on investigations into potential misconduct. 

The proposed revisions, from the Canadian Panel on Responsible Conduct of Research (PRCR), are up for public comment until April 17 and have not been made official. The PRCR is an interdisciplinary review and advisory body to Canada’s three federal research funding agencies: the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council, and the Social Sciences and Humanities Research Council. 


Could a national database of scientific misconduct rulings stop repeat offenders?

Mark Barnes (courtesy of Ropes & Gray LLP)

In an editorial published today in Science, Michael Lauer and Mark Barnes call for greater transparency in investigations of scientific misconduct with an aim toward making sure prospective academic employers know of applicants’ past misdeeds. As we’ve reported, in the absence of transparency around findings of misconduct, some universities have discovered too late they hired someone who has turned out to be a serial offender.

Lauer, who served as Deputy Director for Extramural Research at the National Institutes of Health from 2015 to 2025, and Barnes, a partner at Ropes & Gray LLP in Boston who has served as acting research integrity officer at several U.S. institutions, propose a tracking system similar to the National Practitioner Data Bank (NPDB). That database logs adverse actions and malpractice payments to inform hospitals’ decisions about individual physicians. As Lauer and Barnes note, federal law “requires a hospital to query the NPDB whenever it is considering a new applicant for medical privileges, as well as to conduct repeat queries every 2 years to make sure information on staff is up to date.” We asked Barnes to elaborate on the ideas presented in the op-ed. (He notes he is speaking only for himself here.)

Retraction Watch: You write in your op-ed that universities may avoid sharing personal information — presumably including results of misconduct investigations — for fear of legal claims of defamation or violations of privacy. Are those fears valid?
