Unverifiable researchers are a harbinger of paper mill activity. While journals have clues for identifying fake personas — a lack of professional affiliation, no ORCID profile, or strings of random numbers in email addresses, to name a few — there isn’t a standard template for doing so.
The International Association of Scientific, Technical, & Medical Publishers (STM) has taken a stab at this with its Research Identity Verification Framework, released in March, which gives journals and institutions a way to validate researcher identity. The proposal suggests identifying “good” and “bad” actors based on what validated information they can provide, using passport validation when all else fails, and creating a common language in publishing circles for addressing authorship.
But how this will be implemented and standardized remains to be seen. We spoke with Hylke Koers, the chief information officer for STM and one of the architects of the proposal. The questions and answers have been edited for brevity and clarity.
Retraction Watch: How do the proposals in STM’s framework differ from other identity verification efforts?
Hylke Koers: Other verification efforts in the academic world tend to focus on the integrity and authenticity of the material, such as the text or images, submitted by researchers. While still important, the growing sophistication of generative AI tools makes this increasingly challenging, calling for new, additional measures tied more directly to the person responsible for producing the submitted material.
Retraction Watch: How will the identity verification system prevent or combat certain kinds of fraud, such as with peer reviewers?
Hylke Koers: For publishers that choose to implement the framework, here is how it would work: Any user interacting with the publisher’s editorial system would be prompted to complete a verification process to provide evidence of their identity. That process would be tailored and proportionate to the user’s role: author, peer reviewer, editor, guest editor, and so on.
Such a process makes impersonation or identity theft much more difficult. A concrete example of the kind of identity manipulation this type of verification could prevent is an author suggesting a peer reviewer via an email address that appears to belong to a well-respected, independent researcher but is, in fact, controlled by the author themselves.
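To make that example concrete, here is a minimal sketch of the kind of check an editorial system could run before inviting a suggested reviewer. It is a sketch under assumed names: the function, fields, and trusted sources are hypothetical, not part of the STM framework.

```python
# Minimal, illustrative check: refuse to invite a suggested reviewer
# whose identity carries no verified evidence. All names are hypothetical.

VERIFIED_SOURCES = {"institutional_email", "orcid_trust_markers", "government_id"}

def can_invite_reviewer(suggested: dict) -> bool:
    """Invite only if the identity has been verified through at least one
    trusted source, rather than trusting the author-supplied address."""
    return bool(VERIFIED_SOURCES & set(suggested.get("verified_via", [])))

# An author-supplied lookalike address alone carries no verified evidence:
suggestion = {"email": "famous.researcher@gmail.example", "verified_via": []}
assert not can_invite_reviewer(suggestion)
```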
In addition to strengthening defenses to prevent research integrity breaches, the proposed framework could deter individuals from acting fraudulently. And, if they still do so, it improves accountability: having information about someone’s identity beyond an opaque email address makes it easier to identify and hold them accountable for their actions.
Retraction Watch: What steps would a journal take to verify an author, reviewer or editor’s identity?
Hylke Koers: The framework we are putting forward recommends offering a range of options to researchers, rather than insisting on any single method. If a user doesn’t have access to one method, for any reason, they’d be able to use another, or a combination thereof. Recommended options would include:
- Asking researchers to validate a recognized institutional email address, or to log in through their institution’s identity management system, similar to accessing subscription-based content through federated access services like Shibboleth or SeamlessAccess.
- Having users sign in with ORCID and use so-called “Trust Markers” stored on their ORCID record. Unlike information researchers can add to their own profiles, Trust Markers are claims added to an ORCID record, with the user’s consent, by institutions, funders, publishers and other trusted organizations.
- Official government documents like passports and driver’s licenses. While these don’t offer evidence of academic credibility, they provide stronger verification of individual identity — and a route to accountability — just as they do for many other online services.
- Where none of these options is possible, the editorial system could fall back to manual checks, some of which are already in use today. Examples include contacting individuals directly or asking colleagues to vouch for them.
Researchers could continue to use an opaque email address, such as a Gmail, Yahoo, Hotmail, or Erols account, but such an address alone would not be enough to verify their identity. A minimal sketch of how this fallback chain might work appears below.
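Here is that sketch. It assumes hypothetical method names and numeric trust scores; the framework recommends the methods but does not prescribe any scoring scheme.

```python
# Illustrative fallback chain over the recommended verification methods.
# Method names and trust scores are assumptions, not framework values.

VERIFICATION_METHODS = [
    ("institutional_login", 3),   # e.g., Shibboleth / SeamlessAccess
    ("orcid_trust_markers", 3),   # claims added by trusted organizations
    ("government_id_check", 2),   # via a third-party verification service
    ("manual_check", 1),          # direct contact, vouching by colleagues
]

def verify_identity(available: set, required_trust: int) -> bool:
    """Try each method in turn; any single method (or a combination)
    that reaches the required trust level succeeds."""
    total = 0
    for method, score in VERIFICATION_METHODS:
        if method in available:
            total += score
            if total >= required_trust:
                return True
    return False

# An opaque email address confers no verified trust on its own:
assert not verify_identity({"opaque_email"}, required_trust=1)
# An unaffiliated researcher can still combine other routes:
assert verify_identity({"government_id_check"}, required_trust=2)
```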
Retraction Watch: As for manual checks, are any publishers now doing this?
Hylke Koers: Yes. Some of the members of the group that developed this framework have staff who work directly in this way, manually vetting researchers by exploring their backgrounds and contacting them and their institutions to verify their identities. They are effectively carrying out these checks, but such a manual approach is of course time-consuming and difficult to scale.
Retraction Watch: Can you define what “level of trust” means in this context? And if journals can decide what that level is, won’t it be difficult to introduce a standard?
Hylke Koers: “Trust level” is essentially a shorthand for “how confident can we be that this person is who they claim to be, and that the information they’ve provided is genuine?” It reflects the assurance that an editorial system can have that a contributor is not acting fraudulently. One practical measure of that confidence might be “do we know how to hold this person accountable if needed?”
The appropriate level of required trust is, at its core, a risk assessment that depends on the specific context of the journal, making this a judgment call that publishers or editorial systems will need to make individually. The long-term goal is to create the conditions under which meaningful consensus can emerge.
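To illustrate what such a per-journal risk assessment might look like in practice, one could encode role-based trust thresholds along the following lines. The roles come from the framework; the numeric levels are an assumption for illustration, not a published standard.

```python
# Illustrative only: one publisher's risk assessment expressed as
# role-based trust thresholds. The numbers are assumptions.

ROLE_TRUST_REQUIREMENTS = {
    "reader": 0,          # low risk: self-assertion may be acceptable
    "author": 2,
    "peer_reviewer": 3,   # higher risk: review manipulation is a known attack
    "editor": 3,
    "guest_editor": 3,    # guest editor roles have been a paper-mill target
}

def required_trust(role: str) -> int:
    """Each publisher sets thresholds for its own context; the framework
    supplies the shared vocabulary, not the numbers."""
    return ROLE_TRUST_REQUIREMENTS.get(role, 3)  # unknown roles: be strict
```

Combined with the fallback-chain sketch above, `verify_identity(methods, required_trust(role))` would express the “tailored and proportionate” check Koers describes.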
Retraction Watch: How realistic is it that researchers are going to provide publishers with pictures of their passports or other government documents?
Hylke Koers: This challenge isn’t unique to academic publishing. Many other domains — financial services, social platforms, even dating apps — have needed to verify identities without directly handling sensitive documents themselves. The common solution is to use specialist third-party services that perform identity checks independently, and then return a confirmation of trust to the relying party, without exposing the underlying documents.
We expect that researchers would be reluctant to provide such information directly to publishers or editorial systems and, conversely, that those organizations may not want to take on the burden of managing sensitive personal data.
The framework that we are putting forward supports such federated models, where identity assurance can be handled once by a trusted third-party provider, and then re-used across multiple platforms in a standardised, privacy-preserving way.
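As a sketch of that federated pattern: a third-party provider performs the check and returns only an assurance assertion, which platforms can then re-use. The class, field names, and provider below are assumptions for illustration, not a defined interface from the framework.

```python
# Sketch of federated identity assurance: the publisher receives an
# assertion, never the underlying documents. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class IdentityAssertion:
    subject_id: str        # stable, pseudonymous ID, reusable across platforms
    assurance_level: int   # strength of the underlying check
    issuer: str            # the trusted verification service

TRUSTED_ISSUERS = {"example-id-verifier.org"}  # hypothetical provider

def accept_contributor(assertion: IdentityAssertion, required_level: int) -> bool:
    """The editorial system relies on the issuer's assertion; no passport
    scan or other document data ever reaches the publisher."""
    return (assertion.issuer in TRUSTED_ISSUERS
            and assertion.assurance_level >= required_level)

# Verified once by the provider, re-used by any participating platform:
token = IdentityAssertion(subject_id="abc-123", assurance_level=3,
                          issuer="example-id-verifier.org")
assert accept_contributor(token, required_level=2)
```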
Retraction Watch: Paper mills have created elaborate networks of fake and compromised peer reviewers. Will the ideas put forth in this framework actually do any good against them?
Hylke Koers: This is one of the most important questions we’re looking to answer. Paper mills have found it easy to exploit the current model of weak identity verification, for example by using untraceable email addresses to impersonate legitimate researchers, and using these to manipulate peer review processes. This is where we believe that the proportionate and carefully designed addition of verification could have an effect.
No framework can completely eliminate fraud — particularly if bad actors gain access to legitimate user accounts or infiltrate institutions or publication systems themselves — but by raising the cost of fraud, reducing opportunities for undetected manipulation, and making accountability more feasible, we think the situation can be improved.
Retraction Watch: Aries and ScholarOne had previously claimed to have fixed this vulnerability. Are you saying it didn’t work? Why is this still happening?
Hylke Koers: We’re not in a position to comment on specific examples, but — as the article you mention explains very well — narrow technical patches alone don’t eliminate the underlying problem. The fundamental point concerns system design, not just implementation details.
Rather than closing individual security loopholes (such as insecure password handling) or relying on individual editorial staff to be on the alert for “red flags,” we advocate a shift toward viewing identity and trust in a coherent and systematic way — which is what the verification framework tries to offer. Non-institutional email addresses are perfectly fine for communication, but they should not be the basis for trust decisions.
Retraction Watch: What is the evidence that this framework could deter individuals from acting fraudulently?
Hylke Koers: While the proposed framework is still in development and therefore doesn’t yet have direct empirical evidence of impact, it draws on principles from other domains where identity assurance is used to deter misconduct. A key priority for us now is to gather evidence to support or reject these ideas.
What we do know is that some of the major integrity breaches that have occurred in recent years involve paper mills exploiting systems where identities were poorly verified and accountability was weak. Logically, fraud is easier in these conditions. Introducing basic forms of identity assurance – such as verified ORCID profiles, institutional affiliations confirmed by a trusted party, or known identity providers – addresses some of the known gaps and reduces the ease of operating under false or misleading identities.
Ultimately, no system can eliminate fraud entirely. We believe that this framework will make it harder to act fraudulently, will make it harder to do so without being noticed, and will make it easier to trace and respond when breaches occur.
Retraction Watch: The framework mentions using ORCID as part of the verification process. As of 2020, only about half of researchers were using ORCID. And even fewer — less than 20% — use Trust Markers. How will this work?
Hylke Koers: One of our key recommendations is for publishers, institutions and funders to add more Trust Markers to ORCID records, making them more useful as sources of verified claims. Use of ORCID in general, and of Trust Markers specifically, is growing rapidly, and as they become more useful, that growth may well accelerate.
Retraction Watch: Do the verification methods, especially verification by institutional email, exclude independent researchers?
Hylke Koers: No. The report is clear in recognizing that legitimate researchers must never be excluded from participation in the editorial process (be it as author, reviewer, or editor). Alternative pathways should always exist to accommodate users who lack access to the specifically defined methods of verification.
Retraction Watch: Researchers might be hesitant to show things like their passports or may not have access to identification methods like this. What are the alternative pathways?
Hylke Koers: This is exactly why we are not proposing a single solution but rather a framework that explicitly acknowledges this and recommends a mixture of verification methods, calibrated by risk, and appropriate to different roles in the publishing process.
The goal is not to enforce a narrow set of ID methods, but to ensure that any method used is transparent, auditable, and proportionate to the role and associated risk. For low-risk activities, self-assertion may still be acceptable. For higher-risk roles (e.g., acting as an editor or peer reviewer), stronger assurances may be justified, but those assurances don’t necessarily need to come from a passport scan.
Ultimately, the framework aims to enable inclusion with accountability, not to gatekeep based on institutional privilege. Ongoing work will include testing these alternative pathways and ensuring they are accessible, fair and practical.
Retraction Watch: To push back on this, from the point of view of an unaffiliated researcher, this could be viewed as making their lives more difficult. If the process takes longer to verify unaffiliated researchers, the actual publication could be held up. And this extra step might discourage individual researchers from pursuing publications.
Hylke Koers: That’s a very fair concern – and exactly the kind of issue the framework is meant to surface and address early, not accidentally entrench.
Indeed, if we’re not careful, the introduction of verification steps – however well-intentioned – could introduce new forms of friction, particularly for some groups like unaffiliated researchers. That’s precisely why we’ve recommended a framework rather than a fixed mechanism: so that identity verification can be implemented proportionately, with multiple equivalent routes, and designed to avoid discrimination or delay.
Retraction Watch: The framework offers a lot of guidelines but lacks an implementation strategy. How will this system create uniformity that’s helpful for both journals and researchers?
Hylke Koers: That’s deliberate. The goal isn’t to mandate a one-size-fits-all model, but to provide a shared structure that supports flexibility while enabling interoperability. As mentioned before, it is up to publishers and editorial systems to assess risk levels for their specific contexts and determine which specific verification mechanisms are an appropriate way to address that risk.
The key idea is that even partial or selective adoption — as long as it uses the common language and trust concepts defined in the framework — can still improve consistency across the system. For example, if different journals begin to signal what level of trust they require using shared terminology, and trusted identity providers begin to indicate the level of verification they offer (ideally in a machine-readable way), then those elements can start to align, even if different journals implement different policies.
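For instance, such machine-readable signaling might look something like the following. The field names and serialization are our assumption; the framework defines the shared trust vocabulary, not a concrete format.

```python
# Illustrative sketch of machine-readable trust signaling between a
# journal and an identity provider. Field names and values are assumptions.

journal_policy = {
    "journal": "Example Journal of Integrity",      # hypothetical
    "required_trust": {"author": 2, "peer_reviewer": 3},
}

provider_offering = {
    "provider": "example-id-verifier.org",          # hypothetical
    "assurance_level": 3,
}

def provider_satisfies(policy: dict, offering: dict, role: str) -> bool:
    """Shared terminology lets journals and identity providers align
    automatically, even when individual policies differ."""
    return offering["assurance_level"] >= policy["required_trust"][role]

assert provider_satisfies(journal_policy, provider_offering, "peer_reviewer")
```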
That said, we do intend to work with early adopters — journals, platforms, and identity providers — to test and refine our assumptions and offer recommendations for practical integration patterns. Over time, we anticipate that these implementation pathways will become clearer and easier to adopt, and will provide an evidence base for further iterations.