Exclusive: Thousands of papers misidentify microscopes, in possible sign of misconduct

One in four papers on research involving scanning electron microscopy (SEM) misidentifies the specific instrument that was used, raising suspicions of misconduct, according to a new study. 

The work, published August 27 as a preprint on the Open Science Framework, examined SEM images in more than 1 million studies published since 2010 by 50 materials science and engineering journals. 

The researchers found that only 8,515 articles included both figure captions and the images’ metadata banners, which together are needed to determine whether the correct microscope is listed in a paper. Metadata banners usually contain important information about the experiments conducted, including the microscope’s operating voltage and the instrument’s model and other parameters. 
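
In essence, that check amounts to comparing the manufacturer or model named in a figure caption against the one stamped in the image’s metadata banner. Below is a minimal sketch of such a comparison, using hypothetical caption and banner strings and a hand-picked list of manufacturers; it is an illustration of the idea, not the authors’ actual pipeline.

```python
import re

# Hypothetical list of SEM manufacturers to search for; not the study's actual list.
MANUFACTURERS = ["Hitachi", "Tescan", "Zeiss", "JEOL", "FEI", "Thermo Fisher", "Phenom"]

def find_manufacturers(text):
    """Return the set of known manufacturer names mentioned in a piece of text."""
    return {m for m in MANUFACTURERS if re.search(re.escape(m), text, re.IGNORECASE)}

def caption_matches_banner(caption, banner):
    """Return True/False for a match/mismatch, or None if either side names no known maker."""
    in_caption = find_manufacturers(caption)
    in_banner = find_manufacturers(banner)
    if not in_caption or not in_banner:
        return None  # not enough information to compare
    return bool(in_caption & in_banner)

# Toy example: the caption claims a Hitachi instrument, the metadata banner says Tescan.
caption = "Fig. 3. SEM micrograph acquired on a Hitachi S-4800 at 15 kV."
banner = "TESCAN VEGA3  SEM HV: 15.0 kV  WD: 10.2 mm"
print(caption_matches_banner(caption, banner))  # False -> possible misidentification
```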

Of these papers, 2,400 (28%) listed the wrong microscope manufacturer or model, raising questions about the integrity of the underlying research. 

“This is a very weird mistake to make,” says study coauthor Reese Richardson, a doctoral candidate at Northwestern University in Evanston, Illinois. “It’s like if you wrote all of your code in your manuscript in R and then you said we used Python for all software development.”

For Richardson, there are telltale signs that suggest many of the images in question originate from the same source. For instance, captions in many different papers refer to using a Czech Hitachi instrument even though Hitachi is a Japanese company. 

“This is a sort of bizarre mistake,” Richardson said. “Yet it shows up repeatedly with the same author present on all of these articles. So this is just an instance where the author is reusing text over and over and over again.” What’s more, in many of these papers, the authors used an instrument from Tescan, not Hitachi, Richardson said, referring to two major manufacturers of the devices. 

The levels of misreporting the study found are “genuinely concerning,” said Angus Wilkinson, a materials scientist at the University of Oxford, in the U.K., and a part-time scientific sleuth, who was not involved with the new analysis. 

In some cases, articles with no authors in common nonetheless contained identical typos in the section describing the microscopes the researchers used. “It is pretty clear that these articles were generated or were synthesized at scale without regard for the actual results or the quality of the articles,” Wilkinson added. 

Many papers also contained figures whose metadata identified them as originating from “Amirkabir,” Richardson pointed out, yet none of those papers’ authors was affiliated with Amirkabir University of Technology in Tehran, Iran. “So it seems very likely that someone at Amirkabir University is just leasing out these images to be used by other people,” he said.

For many of the articles with these odd mistakes, Richardson and his team came to suspect the misidentified microscopes were a sign of manuscripts originating from paper mills, which typically churn out bogus or plagiarized papers and sell author slots on them for a fee. 

Wilkinson agreed: “The majority of these cases probably are either poor-quality articles at best where that attention to detail has been lacking or indeed are fingerprints of paper mills.”

It’s technically possible researchers had more than one microscope in the lab and named the wrong one by mistake, noted Nick Wise, a fluid dynamics researcher at the University of Cambridge, in the U.K., whose sleuthing work has resulted in hundreds of retractions. But Wise also thinks a significant proportion of the 2,400 studies could be researchers blindly reusing figures that aren’t theirs. “It’s a symptom of academic fraud,” he said.

A key limitation of the study is its reliance on authors revealing in figure captions which instruments they used, and on their not cropping out the relevant image metadata, Richardson said. 

But he doesn’t blame journals for the missing image metadata: editors generally cropped these banners out, often to make sure papers fit in print. That practice has changed recently with the establishment of the FAIR (findable, accessible, interoperable and reusable) principles for scientific data, and Richardson said journals now play an important role in ensuring image metadata is fully documented. 

The researchers found only 43 of the 2,400 suspect papers had been flagged on PubPeer for other issues such as duplicated images, manipulated data and hand-painted graphs. 

Richardson said he believes many more of the papers would have ended up on the online forum if someone had scoured through them or run them through AI tools like ImageTwin that can identify integrity issues in figures of scientific manuscripts.

According to Richardson, some authors in the study sample made identical mistakes in the figure captions across several different studies. The most prolific authors, who have published hundreds of studies since 2010, misidentified the SEM 20% to 50% of the time, he said. 

“The most prolific author that made this mistake in their articles had done it more than three dozen times and they tended to publish very frequently in the same journals,” Richardson added, “as many as 16 times in the same journal in the same publication year, and a lot of these articles of theirs had already been flagged on PubPeer for other data integrity issues.”



4 thoughts on “Exclusive: Thousands of papers misidentify microscopes, in possible sign of misconduct”

  1. Some of the blame should be passed on to PIs as well. They encourage other researchers to do the work for the first author, without the author actually understanding why they need the data in the first place or how it was collected. The author gets the data but then, in the hurry to publish, does not send a copy of the paper to all the co-authors for rechecking. If PIs properly reviewed papers internally before submission, they should be able to pick up such common errors.

    1. I would disagree slightly here. Even after submission for peer review there are ample opportunities to change an incorrect figure, such as during peer-review responses and when proofing the manuscript after acceptance. Even after publication you can write an erratum. (But yes, of course I agree that the corresponding author should always contact all the authors before submission.)

  2. Wow, this is some fantastic work. Completely expected, however, given how unfamiliar the layperson is with different microscopes and spectroscopy techniques and how they work. Some issues might arise from translation problems, or even from the microscopes themselves (for example, pirated software used by third-party sellers on Amazon producing false metadata, with counterfeit labels making products appear to be higher-end equipment), but even in that case it seems like a massive failure of scientific rigor, if not outright fraud and dishonesty. A recent article comes to mind, actually, in which an anonymous researcher called foul on a paper over stolen images that had been deleted. I thought it sounded massively suspicious at the time, but this makes me wonder whether it was related to this, or whether the images were originally deleted because of the connection to a university in Iran and the potential legal implications, which would explain why the researcher whose data was stolen chose to remain anonymous. Or perhaps they worked for a paper-mill-like organization, had been screwed over by Mr. Saleh, and were seeking to fire back. I bet these researchers are wishing they had remained anonymous instead of paying to have their names attached to the equivalent of scientific garbage.

  3. DISCLAIMER: This is just speculation and should be treated as such.

    Paper mills have huge libraries of micrographs (we’ve seen this for blots, flow cytometry plots, tumors, etc.). They also have text templates for Methods sections.

    Well, they simply put the two together, along with the other templated elements of the article-to-be, without much care for consistency between one part and another.

    As for coverage (what fraction of misidentified SEMs the proposed method flags), one can compare with the comments by this PubPeer user: https://pubpeer.com/search?q=%22conidens+laticephalus%22 – most of their posts are about this very issue. The authors report 43 papers commented on at PubPeer, so most of the roughly 100 misidentifications picked up by Conidens were not picked up by this method.
