
The reviewer, a neuroscientist in Germany, was confused. The manuscript on her screen, describing efforts to model a thin layer of gray matter in the brain called the indusium griseum, seemed oddly devoid of substance. The figures in the single-authored article made little sense, the MATLAB functions provided were irrelevant, and the discussion failed to engage with the results, reading more like a review of the literature.
And, the reviewer wondered, was the resolution of the publicly available MRI data the manuscript purported to analyze sufficient to visualize the delicate anatomical structure in the first place? She turned to a colleague who sat in the same office. An expert in analyzing brain images, he confirmed her suspicion: The resolution was too low. (Both researchers spoke to us on condition of anonymity.)
The reviewer suggested rejecting the manuscript, which had been submitted to Springer Nature’s Brain Topography. But in November, just a few weeks later, the colleague she had consulted received an invitation to review the same paper, this time for Scientific Reports. He accepted out of curiosity. A figure supposed to depict the indusium griseum but showing a simple sine wave baffled him. “You look at that and think, well, this is not looking like an anatomical structure,” he told us.
Like his colleague who had looked at the manuscript before him, he also felt the text read like technobabble produced by a large language model – full of scientific-sounding sentences without much real meaning.
The two reviewers compared notes. Based on the first review, the author had swapped the irrelevant MATLAB functions in the text for others. But the results and images remained the same. “Nice try, my friend,” the second reviewer recalled thinking. “But forget it.”
At the end of November, however, he again was asked to review a manuscript by the same author, associate professor Eren Öğüt of Istanbul Medeniyet University, a public institution. The new paper, submitted to Neuroinformatics, dealt with a different brain structure, but the abstract geometric shapes it presented were beginning to look familiar.
With growing suspicion, the reviewer looked Öğüt up online. In 2025 alone, the Turkish academic had published 25 papers – nearly all of them in Springer Nature journals – and 12 of those were single-authored. Perhaps even more remarkably, Öğüt also managed to review nearly 650 papers that year, according to Clarivate’s Web of Science. Among the more than 1,400 total reviews he has done, 379 were for Elsevier, 225 for Wolters Kluwer Health and 139 for Springer Nature.
Öğüt has also served as an editor for several journals from major publishers and is currently listed as an associate editor of Springer Nature’s European Journal of Medical Research. He teaches classes on anatomy and neuroanatomy and claims to be a member of Sigma Xi, a scientific honor society based in the United States.
To the two reviewers in Germany, Öğüt’s level of productivity did not seem humanly possible. Rather, they told us, it appears to be accomplished through reckless use of generative AI. One indication they could be right: Öğüt’s reviews average 364 words each, which is just a single word more than the average review length calculated from 11 million reviews.
Extensive review activity can help burnish a resume, the reviewers told us, and journal editors might also be more friendly toward manuscripts coming from a diligent reviewer.
‘Rigorous’ peer review
Öğüt defended his publications, telling us some had been under way for several years, and said his review activity reflected a team effort.
“The appearance of both newly completed and previously developed studies being published within the same year was coincidental rather than indicative. Importantly, all manuscripts were submitted through standard journal procedures and underwent rigorous editorial handling and peer-review processes,” he told us by email.
“We use AI tools for editing or improving sentence clarity, just as many other researchers do,” he added. “In fact, in some manuscripts, in line with editorial and reviewer recommendations, we explicitly state that AI was used for editing purposes.”
Öğüt said he serves “as a reviewer and editor for many journals” and makes “every effort to meet deadlines as promptly as possible. Moreover, I work with a dedicated team who support and assist me throughout these processes.”
He also worried about the discussion of his unpublished work “outside the formal peer-review process,” which he said could “constitute an ethical violation.”
But after we reached out to him, his profiles at Google Scholar, ORCID and Frontiers’ Loop all vanished.
In an email from December that Retraction Watch has seen, the reviewers laid out their concerns to John Van Horn, editor of Neuroinformatics, a Springer Nature title. Three of Öğüt’s single-authored research papers from 2025 were published in that journal, they noted, and they all seemed to follow the same template.
“The pattern of his single-authored articles and the manuscript that I reviewed was quite similar, with a similar style of title formation, redundant and/or noninformative figures, description of MATLAB functions that are not relevant for the research question at hand, and the discussion that resembles a literature review without engaging with the results of the manuscript,” the email stated. “Last but not least, the author never [shows the structures he modeled overlaid on] real MRI images, does not share the data or code, and even states that ‘No datasets were generated or analysed during the current study.’”
Under investigation
In a statement, Van Horn told us the journal had also developed “concurrent” concerns about Öğüt’s work “late last year,” and that “the manuscript submissions from this author to Neuroinformatics, past and current, have been referred to the Springer Nature Research Integrity Group for their detailed examination.”
Tim Kersjes, head of Research Integrity, Resolutions at Springer Nature, told us he could not share any details while the investigation is ongoing, “but we want to assure you that we take this matter extremely seriously.”
“We are grateful to the research community for bringing these concerns to our attention,” he said in a statement.
One of the papers in Neuroinformatics the reviewers’ email mentioned is titled “Integrated 3D Modeling and Functional Simulation of the Human Amygdala: A Novel Anatomical and Computational Analyses.” It purports to use a method called elastic shape analysis developed in 2017 by the mathematician Anuj Srivastava and his colleagues, who used the technique to model the amygdala and other brain structures in a 2022 article. Öğüt’s paper cites this work and, strangely, arrives at a specific quantitative result – 38% – that it describes as also a finding of Srivastava’s 2022 paper. Yet the number, purportedly representing the “lateral bulging” of the amygdala in people with post-traumatic stress disorder, does not appear in Srivastava’s article.
“The value of 38% was not intended to indicate an exact numerical identity with Wu et al. (2022), but rather to reflect a representative magnitude of variance typically explained by PC1 in SRNF-based elastic shape analyses,” Öğüt told us. “We acknowledge that explicitly noting this as an approximation would have avoided potential confusion.”
Srivastava, who is an incoming professor at Johns Hopkins University in Baltimore, told us that at least on first reading, Öğüt’s paper seemed “sub standard.”
“The methodological details are missing, the information is superficial, the equations are mis-typed, the procedure is not reproducible, and so on. The paper repeatedly mentions our method (’Elastic Shape Analysis’) but it does not give evidence on how they used our method,” Srivastava said.
Jennifer S. Stevens, an associate professor of psychiatry and behavioral sciences at Emory University in Atlanta and a coauthor of Srivastava’s 2022 paper, also found Öğüt’s work problematic. “Since the methods and results are very dense and confusing,” she said, “I can’t develop a formal response at this moment, but it does seem to be concerningly vague.”
To Srivastava, Öğüt’s publication raises a larger question: “I am wondering how this paper got accepted in the first place,” he said.
Since Öğüt is an associate editor of European Journal of Medical Research, please admire the quality of articles published in that journal: https://pubpeer.com/publications/4D10AB3D8478B70ADEF8539F5CB17F
I’m a little surprised by this post. Retraction Watch, which I’ve been following for many years, typically deals with more proven and serious issues than this.
There’s a range of explanations other than AI. I think the key question is how such obviously low-quality papers were accepted.
Similarly with the reviews, the answer could simply be that they are low quality, or brief. As an editor, I often find the opposite problem: people go well beyond the remit of peer review, criticizing interpretations they don’t agree with or methods they don’t like. Both of these may be reasonable, and discussion of them is often better suited to post-publication commentary than treated as a paper-quality issue.
I think, from a sub judice perspective, you should really let the journal process finish; otherwise, rather than simply reporting these issues, you’re influencing the outcome.
“… academic had published 25 papers …”
Although there are people who put their names on everything and thus “publish” 100+ papers a year, the above quotation is a poor heuristic. With today’s preprints and lengthy review times, spikes like that occur in legitimate research too. For instance, I probably crossed that threshold in 2025, but only because 75% of the work had already been written and submitted in 2023 and 2024.
One-author articles, using terms such as “we compared” and “in our study”. And also great information on author contributions:
Research idea: EO. Design of the study: EO. Acquisition of data for the study: EO. Analysis of data for the study: EO. Interpretation of data for the study: EO. Drafting and writing the manuscript: EO. Revising it critically for important intellectual content: EO. Final approval of the version to be published: EO.
This is an excellent example of how careless, uncritical use of generative AI can become a serious liability for science and for journals.
That said, while the story of the two German reviewers is compelling as a narrative, it raises an uncomfortable ethical issue: peer review is confidential. A reviewer discussing the substance of a manuscript with another person outside the editorial process is, at best, a gray area and, in many journals, explicitly prohibited. Two reviewers then describing details of an unpublished submission to a journalist goes further and risks undermining the trust that the peer-review system depends on.
Ironically, an article intended to spotlight potential misconduct ends up normalizing another kind of ethical breach: disclosure of confidential review content. If concerns are serious, the appropriate route is to report them through the journal’s formal integrity channels, not through public storytelling before an investigation concludes.
Yes, we all have to agree to review under the condition that everything remains strictly confidential. Reviewers contacting Retraction Watch is unethical (I am sure RW will disagree with this). Similarly, there have been authors who submitted papers under fake names to hundreds of journals to test research integrity and expose fake journals – also unethical per journal and COPE guidelines, since you are not supposed to submit a paper to other journals while it is under review with one journal. There is also the case of a scientist who published a paper under his or her cat’s name – this is unethical, in my view, if you follow COPE guidelines. RW, do you agree?
Concerns about paper quality or peer review should be addressed through formal editorial and publisher integrity channels, not via public commentary on unpublished or under-review work. Sharing or publicly dissecting manuscripts obtained through peer review is itself a serious breach of reviewer ethics, regardless of intent. The proper course of action is clear: contact the handling editor and initiate the formal process.
The statement that “the colleague she had consulted received an invitation to review the same paper” raises an additional concern. If a manuscript under review was shared with another researcher, this represents a second breach of reviewer ethics. COPE guidelines are explicit: reviewers must not share manuscripts or discuss them with colleagues without explicit editorial permission. As the reviewers breached confidentiality by sharing the manuscript twice, this is a serious ethical violation that warrants formal review through journal and institutional integrity channels.
High publication or review volume, by itself, is not evidence of misconduct. While the reported numbers may appear unusually high, productivity alone cannot substantiate allegations. If an author published one paper and reviewed ten in a year, would that weaken the concern related to the current paper?
Finally, the suggestion that a researcher’s productivity “did not seem humanly possible” reflects personal incredulity, not evidence. Many scholars have extremely high publication records, and it is not the role of a blog or third parties to publicly question productivity without substantiated proof. If those two reviewers have concerns, the appropriate action is to raise them with editors, as they did, and allow the established processes to run their course.
Until a formal investigation concludes, public judgment is premature. Academic integrity requires evidence, procedure, and restraint. I am not here to defend anyone, but I don’t see any clear evidence here.