
An AI expert at Google has admitted he used the technology to help write a preprint manuscript that commenters on PubPeer found contained a slew of AI-generated phrases such as “squared blunder” and “info picture.”
The paper, “Leveraging GANs For Active Appearance Models Optimized Model Fitting,” appeared on arXiv.org in January but was withdrawn April 7. The author, Anurag Awasthi, is an engineering lead in AI infrastructure at Google. In a PubPeer comment, he described the paper as a “personal learning exercise.”
In March 2025, sleuth Guillaume Cabanac, creator of the Problematic Paper Screener, pointed out in a PubPeer comment that the paper included several tortured phrases. Such phrases, which arise when large language models swap in synonyms for standard terminology, are a telltale sign of AI use. In Awasthi’s paper, “linear regression” became “straight relapse” and “error rate” became “blunder rate,” among others.
Awasthi replied to the comment, saying the “phrasing issues were unintentional artifacts from an earlier revision where automated tools were used to rephrase for variety.”
Another PubPeer comment pointed out similarities between Awasthi’s preprint and a 2016 paper by different authors. Beyond sharing a similar structure, Awasthi’s paper uses much of the same language.
Awasthi again replied on PubPeer, reiterating that the overlap was an “unintended artifact.” His comment reads, in part:
This preprint was intended to explore a new idea—to the best of my knowledge—and I shared it early to test its relevance. The project initially began as a personal learning exercise but sensing a possible novelty, turned into a publication attempt. The layout and literature review, drafted with the help of AI-assisted tools, ended up too close in phrasing to earlier work—particularly the one you’ve highlighted.
After receiving further criticism about the undisclosed AI use, Awasthi replied that he “clearly underestimated the seriousness of preprints.”
He responded to our request for comment by directing us to the Google press office, which did not respond.