Can a better ID system for authors, reviewers and editors reduce fraud? STM thinks so

Unverifiable researchers are a hallmark of paper mill activity. While journals have clues for identifying fake personas — lack of professional affiliation, no ORCID profile, or strings of random numbers in email addresses, to name a few — there isn’t a standard template for doing so.

The International Association of Scientific, Technical, & Medical Publishers (STM) has taken a stab at developing a framework for journals and institutions to validate researcher identity, with its Research Identity Verification Framework, released in March. The proposal suggests identifying “good” and “bad” actors based on what validated information they can provide, using passport validation when all else fails, and creating a common language in publishing circles to address authorship. 

But how this will be implemented and standardized remains to be seen. We spoke with Hylke Koers, the chief information officer for STM and one of the architects of the proposal. The questions and answers have been edited for brevity and clarity.

Retraction Watch: How do the proposals in STM’s framework differ from other identity verification efforts?

Hylke Koers: Other verification efforts in the academic world tend to focus on the integrity and authenticity of the material, such as the text or images, submitted by researchers. While still important, the growing sophistication of generative AI tools makes this increasingly challenging, calling for the development of new and additional measures tied more directly to the person responsible for the production of the submitted material.  

Retraction Watch: How will the identity verification system prevent or combat certain kinds of fraud, such as with peer reviewers?

Hylke Koers: For publishers that choose to implement the framework, here is how it would work: Any user interacting with the publisher’s editorial system would be prompted to complete a verification process to provide evidence of their identity. That process would be tailored and proportionate to the user’s role: author, peer reviewer, editor, guest editor, and so on.

Such a process makes impersonation or identity theft much more difficult. A concrete example of the kind of identity manipulation this type of verification could prevent is an author suggesting a peer reviewer with an email address that appears to belong to a well-respected, independent researcher but is, in fact, controlled by the author themselves.

In addition to strengthening defenses to prevent research integrity breaches, the proposed framework could deter individuals from acting fraudulently. And, if they still do so, it improves accountability: having information about someone’s identity beyond an opaque email address makes it easier to identify and hold them accountable for their actions.

Retraction Watch: What steps would a journal take to verify an author, reviewer or editor’s identity? 

Hylke Koers: The framework we are putting forward recommends offering a range of options to researchers, rather than insisting on any single method. If a user doesn’t have access to one method, for any reason, they’d be able to use another, or a combination thereof. Recommended options would include:

  • Asking researchers to validate a recognized institutional email address, or to log in through their institution’s identity management system, similar to accessing subscription-based content through federated access and services like Shibboleth or SeamlessAccess.
  • Another would be to have users sign in with ORCID and use so-called “Trust Markers” stored on their ORCID record. Unlike information researchers add to their ORCID profiles themselves, Trust Markers are claims added to an ORCID record, with the user’s consent, by institutions, funders, publishers and other trusted organizations. 
  • Official government documents like passports and driver’s licences could be another option. While these don’t offer evidence of academic credibility, they provide stronger verification of individual identity — and a route to accountability — just as they do for many other online services. 
  • Where none of these options is possible, the editorial system could fall back to manual checks, some of which are also being used today. Examples include direct contact with individuals or asking colleagues to vouch for them.

Researchers could continue to use an opaque email address from providers such as Gmail, Yahoo, Hotmail or Erols, but such an address alone would not be enough to verify their identity.

Retraction Watch: As for manual checks, are any publishers now doing this? 

Hylke Koers: Yes. Some of the members of the group who developed this framework have staff who work directly in this way, manually vetting researchers by exploring their backgrounds and contacting them and their institutions to verify their identities. They are effectively carrying out these checks already, but such a manual approach is of course time-consuming and has limited scale.

Retraction Watch: Can you define what “level of trust” means in this context? And if journals can decide what that level is, won’t it be difficult to introduce a standard?

Hylke Koers: “Trust level” is essentially a shorthand for “how confident can we be that this person is who they claim to be, and that the information they’ve provided is genuine?” It reflects the assurance that an editorial system can have that a contributor is not acting fraudulently. One practical measure of that confidence might be “do we know how to hold this person accountable if needed?”

The appropriate level of required trust is, at its core, a risk assessment that depends on the specific context of the journal, making this a judgment call that publishers or editorial systems will need to make individually. The long-term goal is to create the conditions under which meaningful consensus can emerge.

Retraction Watch: How realistic is it that researchers are going to provide publishers with pictures of their passports or other government documents?

Hylke Koers: This challenge isn’t unique to academic publishing. Many other domains — financial services, social platforms, even dating apps — have needed to verify identities without directly handling sensitive documents themselves. The common solution is to use specialist third-party services that perform identity checks independently, and then return a confirmation of trust to the relying party, without exposing the underlying documents.

We expect that researchers would be reluctant to provide such information directly to publishers or editorial systems and, vice versa, those organisations may not want to take on the burden of managing sensitive personal data. 

The framework that we are putting forward supports such federated models, where identity assurance can be handled once by a trusted third-party provider, and then re-used across multiple platforms in a standardised, privacy-preserving way.

Retraction Watch: Paper mills have created elaborate networks of fake and compromised peer reviewers. Will the ideas put forth in this framework actually do any good against them?

Hylke Koers: This is one of the most important questions we’re looking to answer. Paper mills have found it easy to exploit the current model of weak identity verification, for example by using untraceable email addresses to impersonate legitimate researchers, and using these to manipulate peer review processes. This is where we believe that the proportionate and carefully designed addition of verification could have an effect.

No framework can completely eliminate fraud — particularly if bad actors gain access to legitimate user accounts or infiltrate institutions or publication systems themselves — but by raising the cost of fraud, reducing opportunities for undetected manipulation, and making accountability more feasible, we think the situation can be improved.

Retraction Watch: Aries and ScholarOne had previously claimed to have fixed this vulnerability. Are you saying it didn’t work? Why is this still happening?

Hylke Koers: We’re not in a position to comment on specific examples, but — as the article that you mention here explains very well — narrow technical patches alone don’t eliminate the underlying problem. The fundamental point here is to do with system design, not just the implementation details.

Rather than closing individual security loopholes (such as insecure password-handling) or relying on individual editorial staff to be on the alert for “red flags,” we advocate for a shift to viewing identity and trust in a coherent and systematic way — which is what the verification framework tries to offer. While non-institutional email addresses are perfectly fine for communication, they cannot be used to make trust decisions.

Retraction Watch: What is the evidence that this framework could deter individuals from acting fraudulently? 

Hylke Koers: While the proposed framework is still in development and therefore doesn’t yet have direct empirical evidence of impact, it draws on principles from other domains where identity assurance is used to deter misconduct. A key priority for us now is to gather evidence to support or reject these ideas.

What we do know is that some of the major integrity breaches that have occurred in recent years involve paper mills exploiting systems where identities were poorly verified and accountability was weak. Logically, fraud is easier in these conditions. Introducing basic forms of identity assurance – such as verified ORCID profiles, institutional affiliations confirmed by a trusted party, or known identity providers – addresses some of the known gaps and reduces the ease of operating under false or misleading identities.

Ultimately, no system can eliminate fraud entirely. We believe that this framework will make it harder to act fraudulently, will make it harder to do so without being noticed, and will make it easier to trace and respond when breaches occur. 

Retraction Watch: The framework mentions using ORCID as part of the verification process. As of 2020, only about half of researchers were using ORCID. And even fewer — less than 20% — use Trust Markers. How will this work?

Hylke Koers: One of our key recommendations is to increase the addition of Trust Markers into ORCID records by publishers, institutions and funders, and thereby to make them more useful as sources of verified claims. Use of ORCID in general, and Trust Markers more specifically, is growing rapidly, and as their use becomes more effective, it’s possible that this growth will accelerate.

Retraction Watch: Do the verification methods, especially verification by institutional email, exclude independent researchers?  

Hylke Koers: No. The report is clear in recognizing that legitimate researchers must never be excluded from participation in the editorial process (be it as author, reviewer, or editor). Alternative pathways should always exist to accommodate users who lack access to the specifically defined methods of verification.

Retraction Watch: Researchers might be hesitant to show things like their passports or may not have access to identification methods like this. What are the alternative pathways? 

Hylke Koers: This is exactly why we are not proposing a single solution but rather a framework that explicitly acknowledges this and recommends a mixture of verification methods, calibrated by risk, and appropriate to different roles in the publishing process. 

The goal is not to enforce a narrow set of ID methods, but to ensure that any method used is transparent, auditable, and proportionate to the role and associated risk. For low-risk activities self-assertion may still be acceptable. For higher-risk roles (e.g. acting as an editor or peer reviewer), stronger assurances may be justified, but those assurances don’t necessarily need to come from a passport scan.

Ultimately, the framework aims to enable inclusion with accountability, not to gatekeep based on institutional privilege. Ongoing work will include testing these alternative pathways and ensuring they are accessible, fair and practical.

Retraction Watch: To push back on this, from the point of view of an unaffiliated researcher, this could be viewed as making their lives more difficult. If the process takes longer to verify unaffiliated researchers, the actual publication could be held up. And this extra step might discourage individual researchers from pursuing publications.

Hylke Koers: That’s a very fair concern – and exactly the kind of issue the framework is meant to surface and address early, not accidentally entrench.

Indeed, if we’re not careful, the introduction of verification steps – however well-intentioned – could introduce new forms of friction, particularly for some groups like unaffiliated researchers. That’s precisely why we’ve recommended a framework rather than a fixed mechanism: so that identity verification can be implemented proportionately, with multiple equivalent routes, and designed to avoid discrimination or delay.

Retraction Watch: The framework offers a lot of guidelines but lacks an implementation strategy. How will this system create uniformity that’s helpful for both journals and researchers? 

Hylke Koers: That’s deliberate. The goal isn’t to mandate a one-size-fits-all model, but to provide a shared structure that supports flexibility while enabling interoperability. As mentioned before, it is up to publishers and editorial systems to assess risk levels for their specific contexts and determine which specific verification mechanisms are an appropriate way to address that risk. 

The key idea is that even partial or selective adoption — as long as it uses the common language and trust concepts defined in the framework — can still improve consistency across the system. For example, if different journals begin to signal what level of trust they require using shared terminology, and trusted identity providers begin to indicate the level of verification they offer (ideally in a machine-readable way), then those elements can start to align, even if different journals implement different policies.

That said, we do intend to work with early adopters — journals, platforms, and identity providers — to test and refine our assumptions and offer recommendations for practical integration patterns. Over time, we anticipate that these implementation pathways will be made clearer and easier to adopt, and provide an evidence-base for further iterations.
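The alignment Koers describes — journals signaling the trust level they require and identity providers signaling the level of verification they offer, in a shared, machine-readable vocabulary — can be illustrated with a short sketch. To be clear, this is purely hypothetical: the level names, role requirements, and comparison logic below are this article's illustration, not part of STM's framework.

```python
# Illustrative only: a minimal sketch of how journals and identity
# providers might align on shared, machine-readable trust levels.
# All names and thresholds here are hypothetical, not STM's.

from enum import IntEnum

class TrustLevel(IntEnum):
    SELF_ASSERTED = 0      # e.g. a free email address only
    AFFILIATION = 1        # e.g. verified institutional email or an ORCID Trust Marker
    DOCUMENT_VERIFIED = 2  # e.g. a third-party government-ID check

# Each journal declares the minimum assurance it requires per role,
# calibrated to the risk that role carries.
REQUIRED_LEVEL = {
    "author": TrustLevel.SELF_ASSERTED,
    "peer_reviewer": TrustLevel.AFFILIATION,
    "guest_editor": TrustLevel.DOCUMENT_VERIFIED,
}

def is_sufficient(role: str, offered: TrustLevel) -> bool:
    """Return True if an identity provider's offered assurance meets
    the journal's stated requirement for this role."""
    return offered >= REQUIRED_LEVEL[role]

# A reviewer verified only by a free email address would not qualify,
# while one with a confirmed affiliation would.
print(is_sufficient("peer_reviewer", TrustLevel.SELF_ASSERTED))  # False
print(is_sufficient("peer_reviewer", TrustLevel.AFFILIATION))    # True
```

The point of the sketch is only that once both sides use the same ordered vocabulary, a sufficiency check becomes a simple comparison — which is what allows different journals to set different policies while remaining interoperable.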





14 thoughts on “Can a better ID system for authors, reviewers and editors reduce fraud? STM thinks so”

  1. “From the point of view of an unaffiliated researcher, this could be viewed as making their lives more difficult.” As an independent (unaffiliated) scholar, I can attest that life in the academic publishing maze is already difficult enough, with some journals considering lack of affiliation as enough to justify desk rejection without further assessment. The problem, of course, is that journals have become increasingly gun-shy as more bad actors game the system. This proposal could be a step along the path of improvement, but all such steps are inevitably countered by escalation through new tactics on the other side.

    1. While unaffiliated researchers face difficulty publishing their research even when they work with affiliated scientists, researchers affiliated with BIG names (Harvard etc.) are privileged for publication no matter how valuable or reliable their research and resulting papers are. They can get their manuscripts published easily. In this way, such researchers take advantage of affiliation and run a business of paper-dealership as paper-brokers, getting gift authorship and likely some $$ in cash or other forms. It is not hard to find such brokers. In those papers, there is a list of authors from China, Iran, Saudi Arabia, etc., and that broker’s name appears as the last author from the USA, Canada, …, likely as corresponding author with no substantial contribution to the work.

  2. Meh. While there are many reports of papers with fake authors, I haven’t seen evidence that it’s a pervasive problem. The whole point of paper milling is to fluff up the academic record of real people. Peer reviewer identities are another matter, but one of the biggest hassles for peer review is finding peer reviewers, and the resulting excessive delays for publishing. Increasing the friction for peer reviewers is hardly going to improve that. Unmentioned is the self-inflicted, gaping hole in publisher workflow integrity: insisting that authors provide names of suggested reviewers. Duh.

      1. My point exactly. If I had a nickel for every ORCID that’s been created in my name, I’d be a very wealthy noted dead scholar. (Which is why I like the sound of Trust Markers.)

  3. Where would the personally identifying information (for those methods that involve verifying PII) be stored, if it’s stored even temporarily, and how would it be protected and used?

  4. As an “unaffiliated” (retired) researcher I suppose I welcome this initiative. I still receive occasional requests to referee papers, grant applications and so forth, without recompense. Were such requests to give up my time to be accompanied by further requirements to go through an onerous process of proving my identity as a preliminary to working for free, I would feel happier about turning them down.

    More seriously, though, these proposals conflate the issues of identity with those of qualification and impartiality. I don’t know quite how one proves a negative such as the absence of conflict of interest, but in any case there’s nothing about that here.

  5. “One of our key recommendations is to increase the addition of Trust Markers into ORCID records by publishers, institutions and funders, and thereby to make them more useful as sources of verified claims.”
    ORCiD is indeed the way to go, particularly with the trust markers that academic employers nowadays put there. By now, it should already be mandatory for all peer reviewers (and guest editors), and all reviews should be recorded there by default. Big publishers should roll it out quickly, because the situation is getting worse day by day.
    “The proposal suggests identifying “good” and “bad” actors based on what validated information they can provide, using passport validation when all else fails, and creating a common language in publishing circles to address authorship.”
    But already having this discussion is a big no-no. No one trusts publishers that much, and there are numerous risks involved, including identity theft.

  6. From its Wikipedia page:

    “The International Association of Scientific, Technical, and Medical Publishers, known for short by the initials for the last part of its name, STM, is an international trade association…”

    What about cutting trade out of it? Why do scientific institutions and research organisations pay publishers so much money? Why don’t scientific institutions and research organisations publish their work directly? The idea that publishers are unbiased gatekeepers is not just laughable; they are not doing a wonderful job, as evidenced by the existence of such outlets as Retraction Watch. Publishers are dining out, and coining it in, on an advantage they had in the past, control of printing presses and distribution (although most countries had reasonable postal services), which in the electronic age seems redundant. The name recognition, prestige and vanity value of prominent journals will no doubt continue. If only people could learn not to be snobs.

    Most scientific institutions and research organisations already publish annual reports electronically, they are most of the way there! Why does it all have to make this very expensive external loop and in the process fill the coffers of the publishers? The publishers don’t actually check any of the results. If anybody knows an example of where they do please add a comment.

    I believe that about 2 million papers are published each year. Annual reports from scientific institutions, with fleshed-out versions for those interested, would cut down on the number of publications to be scanned, and eventually read. It would be informative to read in annual reports, and in even longer-term reports of research output, what an institution had actually learned (what wasn’t already known), what it was doing differently, or what it was replicating (a way to slow down methodological decline).

    University presses owned by universities could be repurposed away from being businesses to publish work from those universities. Harvard University Press could publish Harvard work, Oxford University Press could publish Oxford work, Cambridge University Press could publish Cambridge work. The repurposing will not come cheap, but the present system isn’t cheap either.

    What about some of the wealthier universities funding testing centers, where the scientific claims made in papers could be tested? From my reading (https://pubmed.ncbi.nlm.nih.gov/27703703/), replication studies will slow down, but not stop, methodological decline. Testing scientific claims may help to slow it down further.

    Some scientific societies do publish journals. Scientific society publishing could be the route for independent researchers to publish. The glaring, and sad, example of a journal that was one of the pioneers in scientific integrity for a dozen years, and which went from scientific society ownership to business ownership, is the Journal of Biological Chemistry (J Biol Chem), which was sold to Elsevier! The very direction you don’t want scientific publishing to go.

  7. Thanks to everyone who’s commented so far – it’s really valuable to read these reactions. We’d like to respectfully offer a few replies.

    Firstly, the article here on Retraction Watch could only realistically cover some of the areas that this work has included. Please feel free to read the reports and provide feedback as part of our community consultation process: https://stm-assoc.org/new-digital-identity-framework-aims-to-strengthen-research-integrity-in-scholarly-publishing/.

    In terms of intentions and plans, we recognize that identity verification is a significant development whose merits, risks and pitfalls need to be carefully considered. It would be simplistic to propose a specific, narrow solution at this point, which is why we have opted to develop a framework that offers a clear solution direction while also recognizing open questions and challenges. That said, this is not just an abstract idea with nothing behind it: publishers and editorial platforms are already working towards real-world pilots and user research, with the goal of learning as much as possible about how to get this right, bearing in mind the very sorts of concerns you’ve outlined.

    With the proposed framework, we think that editorial systems can introduce verification that’s inclusive, transparent and consistent. No one wants to make life harder for researchers, peer reviewers, or editors trying to work with them. At the same time, the current system is vulnerable to exploitation by paper mills and other bad actors, and additional measures are needed to address those vulnerabilities. We suggest that a carefully designed identity verification step can help safeguard research integrity, ultimately improving trust in the academic literature, with minimal additional friction.

    A couple of responses to some points raised in the comments. First, the “good vs bad” language in the introductory text (which is separate from the report itself) presents a simplified framing. The report, by contrast, introduces a more nuanced framework—one that encourages assessing trust and risk along a continuum rather than through binary categories.

    Second, the proposed framework doesn’t require publishing systems to store more personal data. The idea would be to use existing identity providers (and third-party providers for passport validation) and open infrastructure like ORCID wherever possible, rather than setting up anything new. The report advocates for data minimisation and privacy-by-design principles.

    As a third point, it feels worth underlining that the scope of the framework extends beyond authors to include reviewers and (guest) editors, counteracting some of the more elaborate mechanisms that paper mills use to subvert the peer-review system (including the vulnerability around invited reviewers mentioned by one commenter).

    And Einstein proves the point: without verification, anyone can pretend to be anyone and say anything. In a comments section this doesn’t matter, but in scholarly publishing, it really can.

    Thanks again for the thoughtful comments. We invite you to please read the report and provide feedback to guide the further development of the framework.

