10 years ago, Elisabeth Bik published a preprint heard around the world

Elisabeth Bik

If you are at all familiar with scientific sleuthing, you’re familiar with Elisabeth Bik. She is quoted so often in the mainstream media that it is probably difficult to imagine a time when her supersense for spotting similarities in images wasn’t making headlines.

But it was 10 years ago, on April 19, 2016, that she made her debut, when we covered her work screening more than 20,000 biomedical research papers containing western blots. She and her coauthors – Ferric Fang, a member of the board of directors of our parent nonprofit organization, The Center for Scientific Integrity, and a professor at the University of Washington in Seattle, and Arturo Casadevall, of the Johns Hopkins School of Medicine in Baltimore – posted the work as a preprint on bioRxiv.org, and it appeared two months later in mBio.

The preprint was a shot across the bow for journals and publishers, and in the decade since, Bik has advised and mentored others doing similar work. In 2024, she won the Einstein Foundation Award for “identifying misconduct and potential fraud in scientific publications, highlighting science’s problems policing itself.” She donated the proceeds to The Center for Scientific Integrity to create a fund to help other sleuths do their work.

Bik spoke with us earlier this month about the paper, sleuthing and more. The conversation has been edited for clarity and brevity.

Retraction Watch: We first interviewed you in 2016 just as you and your coauthors posted your review of western blots in 20,621 papers. Ten years later, do you know what has happened with those papers?

Elisabeth Bik: Well, not all 20,000, but the 800 or so papers that I found problems in, yes. Of the 782, 177 have been retracted, 42 have an expression of concern and 256 have been corrected. And if I count all three of them, that’s 475, so 60%. 

RW: Do you think that should be 100%? 

Bik: Yeah, I would have loved it to be a little bit closer to 100%. You can see papers are still being corrected. Like this paper, for example, was retracted in 2024, but I reported it in 2015. [On Zoom, Bik was pointing at the spreadsheet she uses to track papers.] Most of these were reported to the journals in 2015.

RW: What did people think of your paper? 

Bik: It was rejected four or five times. In the end, we were like, we’ll just put it as a preprint and do an interview with Retraction Watch. 

Nobody believed this paper. People didn’t believe I scanned 20,000 papers over a period of roughly a year or two. I did a count of how much time it took to scan one paper, and it was about one minute per paper. Really, I’m not reading a paper, I’m just looking at the images. We took that figure out in the end because so many people were like, that’s impossible. I’m proud of it, but that’s apparently the point that breaks everybody.

People also wanted to know, ‘what are your false-positive and false-negative rates?’ We weren’t quite sure. There’s no real gold standard for it. Like, what is the standard for image duplication? I was the first to raise this, so it’s hard to test it against another method. And I also don’t know how many papers I missed. I think we were more worried about claiming a positive where it wasn’t a positive. That’s why my two coauthors were incredibly helpful. But I know I must have missed a lot of these problems.

RW: But 782 out of 20,000 is not nothing.

Bik: Yeah, it’s 4%, or 1 in 25.

RW: You’re known for finding duplications and manipulations in images, but you started out scrutinizing papers for plagiarism. 

Bik: That is how it all started. I found that somebody had plagiarized my work. And I worked on plagiarism for nine months or so. And then I came across a Ph.D. thesis that had not only plagiarized text in the introduction, but also a duplicated image that my eye was drawn to. And that evening, I was thinking, wait, that happens? Maybe I should open a couple of PLOS One papers. And I found a couple already that evening. Otherwise, I would not have been talking to you today. Looking back, it’s one of those little moments that change your career.

RW: You had a recent correction to a paper you coauthored. 

Bik: All my papers have been criticized, scrutinized. In a way, it’s fair. I criticize others, people can criticize me. In that paper there was a splicing where we left out a group, and you could see a remnant of a line. It wasn’t like we were trying to change the results or anything. But we corrected it. We found a lot of the original data and we worked with the journal to correct it. 

All my papers have been torn apart for the weirdest reasons. You have to put so much work into addressing these things. In a way, it’s fair to be criticized, but I do feel sorry for my coauthors who are dragged into these long discussions. 

RW: Do you still scan papers by eye or are you mostly using software? 

Bik: Both. Sometimes I see the problem right away, and then I run it through Imagetwin and Proofig. Duplications between papers, especially, are something I’m not good at, because I cannot remember a million other papers, but the software can. Now you scan these papers and it finds, look, that blot has been used in that other paper, but it’s flipped and it’s representing a different protein. So it’s the same photo, just flipped and resized a bit. It’s very clear once you compare it, but I would never be able to remember all these blots and all these papers and see these patterns. So we’re finding more of these problems with these software tools that have these libraries of images.

RW: You, and many others – including Retraction Watch – have been accused of targeted attacks in post-publication peer review on social media. What effect does that have on your work?

Bik: It worries me a bit, especially when they tag my family. I’m always a bit worried about personal safety. Sometimes the critics will send emails to the host of an event I’m speaking at and say that I’m fraudulent. You have to say to the organizers, I’m very sorry you’re bothered by my enemies. And then there’s talk about it: What should we do? Should we respond? Should we not respond? Emails have to be sent to all these dozens of people telling them not to respond. It’s just a lot of work for everybody involved. And I feel so sorry that comes on top of organizing a conference, which already is a lot of work. On the other hand, I think it’s good that they see my work does result in personal criticism.

RW: Sleuths have become an essential part of the whole research integrity ecosystem. How has that changed in the last 10 years? 

Bik: I think it’s wonderful to have this growing community because this work, at least the way I do it, is very by myself, which I like. I’m a super-introvert. I don’t really work well with other people. I like to be loosely connected to a community. We’re all sort of a bunch of misfits. I love to be independent. Then there’s other communities who are meta scientists. And people working at publishers doing this work are also wonderful people. And I think all the noses are sort of starting to point in the same direction, which is lovely. It’s becoming part of what science should be. But you have to start in a way that upsets a lot of people and makes people uncomfortable. 

There’s still a lot of room to grow. I think we all agree on that. If you buy a car and the airbag is not good, there should be a recall, right? It should be better. Moving forward, all the cars should have better airbags or better wheels that don’t fall off. If we buy a product, we should be able to complain about it. There should be quality control and there should be customer service. And I think that was a bit lacking in the scientific publishing world. And both of these things are getting better. We are growing towards each other and learning from each other.

RW: One of the criticisms we’re seeing as a result of some of the big misconduct cases is the belief that they mean we can’t trust science. What do you say to that? 

Bik: I end most of my talks with this exact point. I’m talking about that one rotten apple in the fruit basket. I love science and I do this to make science better. Maybe I’m considered a vigilante because I point out the bad stuff, but it doesn’t mean that we cannot trust science. We should just do a little bit better in screening before we publish things. We should be critical. And I feel we can all agree on that. 

But it has been used, weaponized, in the misinformation era where people say, all science is fraudulent, that you cannot trust any science paper. I think that is the wrong attitude, but it’s the double-edged sword we’re working with. 

It’s very easy to draw that conclusion, but that is the wrong conclusion. We need science.


Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on X or Bluesky, like us on Facebook, follow us on LinkedIn, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].

